[comp.ai.philosophy] Emergent Properties

smoliar@vaxa.isi.edu (Stephen Smoliar) (09/27/90)

In article <3918@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim
Ruehlin, Cognitologist domesticus) writes:
>In article <26FA3460.1C7D@marob.masa.com> cowan@marob.masa.com (John Cowan)
>writes:
>>
>>People certainly do abuse the term "emergent", but it does have a definite
>>meaning.  An emergent property is a property of a system that cannot be
>>accounted for by the properties of the system components, relative to some
>>level of explanation.
>
>This sounds like "emergent = I don't know".  Your definition I agree with,
>but I don't think it buys us anything.

If we approach it properly (rather than using it as a euphemism for our own
ignorance), it offers the possibility of some intellectual hygiene.  Let me
return, once again, to one of my favorite examples:  the Darwin automata
being investigated by Gerald Edelman's group at the Neurosciences Institute.
The ability of a Darwin automaton to perform perceptual categorization is an
emergent property.  What this means is that one cannot point to some specific
system component and say, "Here is where the knowledge to recognize the letter
A resides."  An outside observer will be able to note that there are parts of
that automaton which exhibit similar behavior when confronted with various
presentations of that letter, but one cannot do a Newell-style knowledge level
analysis of the system.

What does this have to do with intellectual hygiene?  It is a lesson to remind
us that much of artificial intelligence has run aground by virtue of our
insistence on asking ill-formed questions.  It confronts us with the
possibility that, for example, asking for a set of necessary and sufficient
conditions which will enable some kind of decision logic sitting behind a
retina to recognize the letter A may be one of those ill-formed questions.
This is not to say that it is giving us any answers.  However, when we are
having trouble finding answers to our questions, often we would do well to
question those questions.  The study of emergent properties provokes us to
consider how some of those questions might be reformulated into ones which
might be more accommodating.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

n025fc@tamuts.tamu.edu (Kevin Weller) (09/27/90)

In article <3918@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>
>This sounds like "emergent = I don't know".  Your definition I agree with,
>but I don't think it buys us anything.   People usually throw this term
>around as if it means something when it really means "we don't know how
>this happens, only that it does".  I've seen "emergence" used to try to 
>"explain" things, but how can you explain something using a term that
>means "unknown"?
>
>- Jim Ruehlin
>
>p.s. I'll be posting all other responses to this conversation to
>comp.ai.philosophy.

No, emergent DOES NOT mean "We don't know how, but it happens."  Let me try
out a more elaborate definition on you.

There are some highly ordered systems in nature with properties which can
have no explanation that is solely dependent on the properties of the
individual components of the system.  That is NOT the same thing as saying
that we don't know how to explain them at the component level (although we
may not [yet] know how to explain them at any level); it says that a pure
component level explanation can *never* be complete, that we must also
study *the way the system is put together* to come to an adequate
explanation of the phenomenon.

Paul Davies puts it best when he asks if a Beethoven symphony is nothing
but a collection of notes or if a Dickens novel is nothing more than a
collection of words (*).  On one level of description, the novel is a
collection of words, but is this all we need to know in order to fully
appreciate it?  There is so much more depth to be found if we only step
back and take in the bigger picture!  This is the origin of the phrase "the
whole is greater than the sum of its parts."

The same principle applies to the phenomenon of life, and by extension, to
intelligence.  Although I am most certainly alive, the individual atoms
that I am made of can hardly be called living.  Life is an _emergent
property_ of the complex and highly organized way in which living systems
are put together, and any serious study which tries to explain the
higher-level aspects of life by purely reductionistic biochemistry is
doomed to failure (not to say that biochemistry is useless; that's the
opposite error, and I don't want to make that mistake either).  Your
confusion is undoubtedly related to the largely reductionistic approach
science has historically taken, but the new physics seems to require a more
"holistic" (now there's an often-abused word if I ever saw one, but it has
its valid applications) approach.  Some problems can be solved only by
putting the pieces together.

The human brain is one of the most (if not *the* most) complex organized
system presently known to exist.  The complex patterns of operation in this
highly organized system are the physical expressions of intelligent
information processing.  As others have stated in previous articles, my
individual neurons may be no more intelligent than those of, say, an
earthworm, so we cannot appeal to neuron physiology exclusively to explain
my intelligence (assuming you believe I am intelligent, of course :-) ).
They are part of the total explanation, but the rest is due to the properly
ordered construction of the components into a working whole.

Human beings are truly moving collections of atoms, but not *merely* moving
collections of atoms.  Magic is not invoked here.  In fact, this
functionalistic understanding precludes any need for a magical life-aura at
all (not to say that there isn't any such thing, but by Ockham's Razor, it
becomes an unnecessary annex to the concepts of life and intelligence).

-- Kevin L. Weller  (philosopher, computer programmer, etc.)

(*) Davies, Paul.  _God and the New Physics_.  New York: Simon, 1983.
	And no, this is not another _Tao of Physics_ Shirley MacLaine-type
book!  Davies gives an essentially unbiased discussion of the impact
of the new physics on modern religious AND scientific thought.  I would
recommend it to anyone interested enough to pursue the topic in earnest.

cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) (09/27/90)

I have taken the liberty of cross-posting some of these comments to the
mailing list for Cybernetics and Systems, where we deal with issues of
emergence in complex systems all the time.  Blurb follows.

============================================================================

     ANNOUNCING FORMATION OF A MAILING LIST FOR SYSTEMS AND CYBERNETICS
     
     An electronic mailing list dedicated to Systems Science and Cybernetics 
     is currently in operation on the SUNY-Binghamton computer system.  The 
     list is committed to discussing a general understanding of the evolution 
     of complex, multi-level systems like organisms, minds, and societies as 
     informational entities containing possibly circular processes.  Specific 
     subjects include Complex Systems Theory, Self-Organizing Systems Theory, 
     Dynamic Systems Theory, Artificial Intelligence, Network Theory, 
     Semiotics, fractal geometry, Fuzzy Set Theory, Recursion Theory, computer 
     simulation, Information Theory, and more.
     
     The purposes of the list include: 1) facilitating discussion among those 
     working in or just interested in the general fields of Systems and 
     Cybernetics; 2) providing a means of communicating to the general 
     research community about the work that Systems Scientists and 
     Cyberneticians do; 3) housing a repository of electronic files for 
     general distribution concerning Systems and Cybernetics; and 4) providing 
     a central, public directory of working Systems Scientists and 
     Cyberneticians.  The mailing list can store or transmit notes and 
     messages, technical papers, references, calls for papers, computer 
     programs, and pictures and diagrams.
     
     The list is coordinated by members of the Systems Science department of 
     the Watson School at SUNY-Binghamton, and is affiliated with the 
     International Society for the Systems Sciences (ISSS) and the American 
     Society for Cybernetics (ASC).  The list is open to everyone, and we 
     currently have over three hundred members from America, Canada, and 
     Europe.  Our subscribers are from both academia and industry, and while many are 
     active researchers, others are just "listening in".  We share in an 
     exciting, ongoing, multi-way conversation about many aspects of Systems 
     and Cybernetics.  Different levels and kinds of knowledge and experience 
     are represented.
     
     We invite all to join the discussion.  To subscribe, you need a computer 
     account with access to one of the international networks (e.g. BITNET, 
     USENET, ARPANET, INTERNET, CSNET).  Send a file containing only the line: 
     'SUB CYBSYS-L Your Full Name' to the list server at the address 
     LISTSERV@BINGVMB.BITNET. 
     
     Once subscribed, please post a message to the list itself at the address 
     CYBSYS-L@BINGVMB.BITNET.  In the message, include your name, affiliation, 
     and a brief description of your work and/or interest in the fields of 
     Systems and Cybernetics. 
     
     List moderator: CYBSYS@BINGVAXU.CC.BINGHAMTON.EDU 
     
     Author: Cliff Joslyn, CJOSLYN@BINGVAXU.CC.BINGHAMTON.EDU
     
  


-- 
O------------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large, cjoslyn@bingvaxu.cc.binghamton.edu
| Systems Science, SUNY Binghamton, Box 1070, Binghamton NY 13901, USA
V All the world is biscuit shaped. . .

byland@iris.cis.ohio-state.edu (Tom Bylander) (09/27/90)

In article <15132@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
>... one cannot point to some specific
>system component and say, "Here is where the knowledge to recognize the letter
>A resides."  An outside observer will be able to note that there are parts of
>that automaton which exhibit similar behavior when confronted with various
>presentations of that letter, but one cannot do a Newell-style knowledge level
>analysis of the system.

As I understand it, a knowledge-level analysis of a system specifies
what knowledge can be ascribed to the system based on its external
behavior.  There is no problem in ascribing "the knowledge to
recognize the letter A" to machines that actually recognize the letter
A, no matter how the machines are constructed.

Talking about the knowledge within components of a system is what
Newell called a "mixed model" (which is elaborated upon by Sticklen's
JETAI article "Problem Solving Architecture at the Knowledge Level").
The structure of the components and their interaction are described at
the symbol level, and the components themselves are described at the
knowledge level.  In this style of analysis, one must be careful.
Knowledge can be ascribed to a component only if the component by
itself *behaves* as if it has the knowledge.  Merely having a
representation of the knowledge is insufficient because, for
appropriate behavior to occur, some other component is needed to use
the representation.

So I would reword your first claim quoted above as follows: "One
cannot point to some specific system component and say, `Here is where
the knowledge to recognize the letter A is *represented*'".  I don't see
why this kind of situation is so surprising, because it is true for
most ordinary programs.  For example, a sorting program "knows" that
less-than is transitive, but, for typical sorting algorithms, it is
not possible to point out a "component" that represents this
knowledge.  (Knowing transitivity would appear to be an "emergent"
property of sorting algorithms.)
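The sorting example is easy to make concrete; here is a minimal sketch (the function name and data are invented for illustration):

```python
# Insertion sort consults only pairwise '<' comparisons between adjacent
# elements, yet its output is totally ordered -- the transitivity of '<'
# that the program "knows" is represented in no identifiable component.
def insertion_sort(xs):
    out = list(xs)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j] < out[j - 1]:  # adjacent comparison only
            out[j], out[j - 1] = out[j - 1], out[j]
            j -= 1
    return out

result = insertion_sort([3, 1, 4, 1, 5, 9, 2, 6])
# Total order falls out of purely local swaps.
assert result == sorted([3, 1, 4, 1, 5, 9, 2, 6])
```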

Finally, I don't think you want to claim that "one cannot do a
Newell-style knowledge level analysis" of such systems or their
components.  Certainly, it might be difficult to determine how the
component interaction results in recognizing the letter A, but this is
a problem of symbol-level analysis.  With regard to ascribing
knowledge to components, I understand why it is difficult (because it
requires considerable experimentation and analysis), but not why it is
impossible.

						Tom Bylander

david@hp-ses.SDE.HP.COM (David McFadzean) (09/27/90)

I would like to add another, simpler example of an emergent property
that I'm sure most of us would find familiar. (This is originally due to
Hofstadter from _Metamagical Themas_ which, incidentally, I would
recommend highly to all comp.ai.philosophers.)

Say you are on some kind of time-sharing system or LAN and you notice
that whenever the number of users gets to be 12 or above, the disks
start thrashing badly. Would you go to your local sysadmin and ask her
to change the max-user-count parameter from 12 to 20? Not very likely
because it's obvious that the number 12 (as a max-user-count) is not
contained in any memory location in the system where it can be accessed
and modified; it's an emergent property of the dynamics of the system.
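For the curious, the thrashing example can be sketched as a toy paging model (every number here is invented; real thrashing is messier):

```python
# Toy paging model: each user needs a working set of pages, and once the
# combined demand exceeds physical memory, page faults climb sharply.
# Note that the breakpoint (12 users) appears nowhere as a constant; it
# emerges from the ratio MEMORY_PAGES / WORKING_SET.
MEMORY_PAGES = 50   # invented
WORKING_SET = 4     # invented

def fault_rate(users):
    demand = users * WORKING_SET
    if demand <= MEMORY_PAGES:
        return 0.01                                     # occasional cold-start faults
    return min(1.0, (demand - MEMORY_PAGES) / demand)   # thrashing regime

# First user count at which thrashing begins.
breakpoint_users = next(u for u in range(1, 100) if fault_rate(u) > 0.01)
# breakpoint_users == 13: the system degrades past 12 users, a threshold
# stored in no memory location of the "system" above.
```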

As for the human brain being the most complex organized system known to
exist, I would say that the system of all human brains that we call
human society is more complex (though calling it organized might be
stretching it. :) If this is true, could individuals be considered
analogous to neurons with respect to nations? Can countries be
considered sentient at some higher level?

--
David McFadzean
HP Calgary Product Development Centre

david@hpcpdca.calgary.hp.com or david@hp-ses.sde.hp.com

p.s. Can anyone tell me where I can find some more information on the
Darwin automata being investigated by Gerald Edelman's group at the
Neurosciences Institute? aTdHvAaNnKcSe.

jkrieger@bucsf.bu.edu (Josh Krieger) (09/28/90)

   In article <3918@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
   >
   >This sounds like "emergent = I don't know".  Your definition I agree with,
   > ....

A valuable point brought up in one of our NN classes is:
"When is it a virtue to make (representations of) information explicit"

Need one explicitly denote each piece of information and each subpiece
in a device (such as the brain) or can information be stored implicitly
(emergent information).

Just a bit of food for thought.

+ Josh +

jmc@Gang-of-Four.usenet (John McCarthy) (09/29/90)

I'm suspicious that "emergent" is just a fancy term for the fact
that any system has some properties that are not properties of
the components.  Let's take a trivial example.  Suppose we make
an EXOR circuit out of AND gates and inverters.  3 AND gates and
3 inverters will do it.  Does the fact that the circuit computes
EXOR count as an emergent property, since none of the components
computes EXOR?  I suspect the users of "emergent" want to suggest
something fancier.  But what?
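For what it's worth, the construction is easy to check by simulation; a sketch (this wiring builds the final OR via De Morgan, so the gate count differs from the 3-and-3 above):

```python
# XOR wired from AND gates and inverters only.  No individual gate
# computes XOR; the truth table belongs to the wiring as a whole.
AND = lambda x, y: x & y
NOT = lambda x: 1 - x

def xor_circuit(a, b):
    left = AND(a, NOT(b))                    # a AND (NOT b)
    right = AND(NOT(a), b)                   # (NOT a) AND b
    return NOT(AND(NOT(left), NOT(right)))   # OR via De Morgan

for a in (0, 1):
    for b in (0, 1):
        assert xor_circuit(a, b) == a ^ b    # circuit computes XOR
```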

n025fc@tamuts.tamu.edu (Kevin Weller) (09/29/90)

In article <1990Sep27.185805.21493@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:
>
>  Some of the recent posts have criticised the term "emergent property"
>  as a euphemism for "we don't understand" and some have defended the
>  term by examples of the application of the term and some have tried
>  to justify the term as a valid one more abstractly.
>
>  If I assemble a device from wheels, pedals, metal tubes and such,
>  and it happens to become the most efficient transportation device
>  around, is that an emergent property of the parts?  I doubt that
>  defenders of the term would like it to be so.  But why not?  Probably
>  because the transportation property was a goal of the design process
>  that controlled the assembly.

Your example is indeed in harmony with my definition of emergence
since, for instance, a metal tube can hardly be said to have an
"efficient transportation device" property in miniature!  The property
is of the whole system, not of the parts.  This is not to say that the
properties of the tube at its own description-level (or at lower
levels still) don't play a part; that would be the opposite one-sided
view.  Whether the transportation property was a design goal or not is
simply beside the point, just as the ultimate goal of AI researchers
has no destructive effect on the intelligence in any systems they
design/build (other factors can have destructive effects, of course).
Emergent properties are simply properties, independent of the
designer's (or designers') plans.  Your device is probably not the
BEST example of emergence because the "transportation device"
description-level is not that much higher (more complexity-abstract)
than the component level(s), but it does serve as a simpler analogue
of more complex systems.

>                                 Now suppose a Venusian engineer viewed
>  this as a process of putting wheels, metal tubes, pedals, and an
>  engineer, and a few tools, in a room.  These "parts" may well have
>  an emergent property, from the perspective of the Venusian, since
>  they had no expectation of a transportation function arising from
>  the collection of parts.  (Yes the engineer is a part in this view.)
>  I identify this as an emergent property because I believe it would
>  satisfy most of the ad hoc definitions I have heard.  

You are correct again regarding the emergence in this example, only
from another level of description (that of the Venusian).

>  I do not expect most supporters of the "emergent property" term to
>  like this use of the term.  They will not like it (I am guessing of
>  course) because they will feel that they can identify the source
>  of the property which has emerged ... but of course the Venusian,
>  having utmost contempt for the large water based carbon compound in
>  with the metal parts, will not be able to identify the source and
>  the "emergence" is viewed from that creature's perspective.

Your guess (concerning my "not liking" this usage) is wrong.  Your
illustration is perfectly compatible with my understanding of
emergence, although as I said before, it is not the most useful
application of the term.  However, it does bring up one aspect of
emergent properties that I failed to address in my original posting
(sorry).

There are usually many different ways to describe an organized system,
these different ways corresponding to different levels of abstraction,
called description-levels.  For example, the collective behavior of an
ant colony is considerably more complex and purposive than that of the
individual ants, so at one description-level, we have a bunch of ants
which are each behaving on the relatively simple level of programmed
automata, while at the colonial level of description the whole is
acting for a larger purpose, and often doing a fantastic job of it.
There may be other examples more effective for you in getting the
point across, but to keep this posting from getting TOO long, I'll put
references at the end of the article.  The key points are: physical
systems in the universe vary in their complexity and organization, the
simpler systems requiring far fewer (and lower) description-levels
than the more ordered ones; and emergence is always RELATIVE to
whatever description-level is being considered, so that the more
removed a property is from those of the system's constituent parts,
the more useful the term is in describing the synergistic effect of
their ordered combination.  THERE IS NO CONFLICT between the hardware
and the software description-levels.  They are complementary.  I
suspect that if we ever manage to build an artificial intelligence,
the construction will involve some combination of knowledge from
several levels of description (physics, chemistry, neurophysiology,
cognitive science, etc.).
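A stock illustration of description-levels (not from Edelman's work; this is Conway's Game of Life) can be run in a few lines:

```python
# Conway's Game of Life: the update rule mentions only single cells and
# their eight neighbours, yet at a higher description-level a "glider"
# exists and travels diagonally, one cell every four generations.
from collections import Counter

def step(live):
    """One generation: birth on 3 neighbours, survival on 2 or 3."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# "Moves diagonally" is a fact about the pattern, not about any rule or cell.
assert cells == {(r + 1, c + 1) for (r, c) in glider}
```

The glider is a property of the colony of cells, in just the sense the ant colony is described above.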

>  A second example: if you put large pine forests, rabbits and foxes
>  together in northern Canada, you will get a 10 year cycle of boom
>  and bust in the populations of rabbits, foxes and young pine trees.
>  (Rabbits LOVE to eat pine needles, far more than carrots.)  Is this
>  pattern an emergent property?  From the perspective of a naive and
>  innumerate individual, the answer is certainly yes.  The cycle is
>  there, it was not predictable (by them) and it is not easy to 
>  identify the source in myopic analysis of rabbits, foxes, or pines.
>  From the perspective of an ecologist or someone versed in simple
>  dynamical systems theory, it is not an emergent property.  It can
>  be predicted, modeled, and well explained, based on properties of
>  the constituent elements, e.g. kilocalories needed, supplied, 
>  gestation period, etc.
>
>  Again, I would expect advocates of the "emergent property" term to
>  be somewhat bothered by this situation, but I believe it is because
>  they will tend to automatically associate with the technically
>  astute view of the dynamics.  Two hundred years ago, ecologists
>  knew it happened but did not understand at all why.
>
>  But, it was not called "an emergent property of rabbits, foxes and
>  pines".  It was simply an unexplained experimental observation.  That
>  does not have nearly the same gloss, but it is more accurate.  By
>  saying that X is an emergent property of {A,B,C...} and by providing
>  some sort of definition for the term "emergent" an unsubstantiated
>  conclusion has been reached.  A few people seem to make this part
>  explicit in the use of the term, directly or indirectly saying the
>  explanation WILL NOT come from reductionist methods, not simply that
>  it HAS NOT come from that source.  Certainly, in the case of neural
>  systems (real or synthetic) it is not known that a suitable means
>  of reductionist explanation will not be found.  Just that it has not
>  been found.

On the contrary, I am not bothered by it at all.  Your example is
simply not one of emergence.  I don't claim that ALL such phenomena
are emergent.
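Indeed, the quoted boom-and-bust cycle is classic Lotka-Volterra behavior, derivable from component-level rates; a minimal sketch with invented coefficients:

```python
# Lotka-Volterra predator-prey model integrated by Euler steps (all
# coefficients invented).  The oscillation falls out of component-level
# birth, predation, and starvation rates; no constant encodes the cycle.
def simulate(steps=40000, dt=0.001):
    prey, pred = 10.0, 5.0
    series = []
    for t in range(steps):
        dprey = prey * (1.0 - 0.1 * pred)       # births minus predation
        dpred = pred * (-1.5 + 0.075 * prey)    # starvation plus food supply
        prey += dprey * dt
        pred += dpred * dt
        if t % 100 == 0:
            series.append(prey)
    return series

s = simulate()
peaks = sum(1 for i in range(1, len(s) - 1) if s[i - 1] < s[i] > s[i + 1])
assert peaks >= 2   # the prey population cycles repeatedly
```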

What I am trying to say is that SOME phenomena CAN NOT be explained
SOLELY on the basis of component properties.  If this weren't so, we
would be capable of explaining every phenomenon using microphysics
alone if given a total knowledge of all physical laws (or perhaps the
much-sought-after Grand Unified Theory).  But you say, of course,
since all the other sciences derive from the most fundamental (i.e.,
REDUCED) law(s) of nature!  Then, would you be willing to say that we
should abandon all studies in chemistry, biology, meteorology, and so
on in favor of physics alone?  It would explain everything, wouldn't
it?  ONLY AT ITS OWN LEVEL OF DESCRIPTION!  Think about it.

Individual memories are not perfectly localized in the brain, but
rather, they are stored as "tendencies" of the overall network to
reproduce the patterns that function as memories (a distributed memory
system).  My having a memory is represented on the hardware
description-level by a pattern of electro-chemical processes in my
brain.  You can scrutinize the individual operating neurons to your
heart's content and never see the full significance of the event.
It's the *pattern* that counts on this description-level, and you must
step back and look at the whole system to see the operation.

Note that I am not advocating any kind of dualism here.  Different
levels of abstraction can all be referring to the same stuff, so if
anything, abstraction-level is more of a monistic concept.  It is
philosophically classified as a form of materialism, but, of course,
any good physicist knows the difficulty in defining "matter" and
distinguishing materialism from "spiritism" of a sort.

>  (So-called "chaotic systems" are an interesting counter point, since
>  there is some analytic evidence that there are classes of systems
>  for which it is not possible to predict specific behavior, based on
>  ANY measurement of the system.  But in these cases, it IS often
>  possible to characterize the sorts of behavior that the system is
>  capable of.  I take it that emergent properties deal in the currency
>  of behavior characterizations, not specific predictions, so the
>  behavior of a chaotic system is not an "emergent property".)

I'm not clear on what you mean by "currency of behavior
characterizations," but I can say that chaotic systems are composed of
components that obey physical laws just like components of ordered
systems.  However, the only true high-level properties of such a
system to count would probably be its very chaos and its range of
possible behaviors.

>  People studying intelligent systems seem to operate as if they already
>  know what the suitable "atoms" of the systems are.  Since they
>  are unable to explain the observations based on properties deduced
>  from these atoms, they reach for terms such as "emergent properties"
>  rather than doing good science and looking to reformulate the basic
>  hypothesis in new ways.  Hiding behind a pseudo-science of "emergent
>  properties" will probably delay the real struggle: to find more
>  suitable analytic tacks and more suitable atoms to form the 
>  foundation of a "proper" scientific explanation.
>
>  I doubt advocates of "emergent properties" will like that either.

No, we don't really know enough about the "atoms" of intelligent
systems to build one yet.  As I said, low-level and high-level
properties are complementary and inextricably bound together in highly
organized systems, and neglecting either kind will probably result in
failure.

Your narrow definition of "good science" is slowly falling into
disfavor simply because it is no longer considered adequate for fully
explaining certain natural phenomena such as quantum events, weather,
life, intelligence, etc.  Science should still be largely
reductionistic, but there are some modern scientific problems that
need a more open-minded approach if we ever hope to make any headway
with them (scientifically).  The experimental method and
quantification are not rejected.  The essence of science is untouched.
Consult the references below if you want to see more scientific
support for my position.

>----gary----

-- Kevin

PS: Thank you for responding with such a well-thought-out article.  It
forces me to clarify my position not only for you, but for myself as
well.  One thing: I'm cross-posting this message to comp.ai.philosophy
since ours is the kind of discussion that really belongs there.  If you
intend to post a new followup article, would you be so kind as to post
it in comp.ai.philosophy only?  I'm sure that everyone else in comp.ai
would appreciate that.  Thanks!


Further reading:

Bohm, David.  _Wholeness and the Implicate Order_.  Routledge & Kegan
	Paul, 1980.
	David Bohm is a physicist by profession.  This work really
drives home the holistic nature of quantum mechanics and related
disciplines.

Hofstadter, Douglas R.  _Godel, Escher, Bach_.  Basic, 1979.
	On the pitfalls of singleminded reductionism.

Peat, F. David.  _Artificial Intelligence: How Machines Think_.  Baen,
	1985.
	The final chapter discusses the explications and implications
coming out of the modern quest for a specific definition of
intelligence.

decomyn@penguin.uss.tek.com (09/29/90)

In article <JMC.90Sep28150656@Gang-of-Four.usenet> jmc@Gang-of-Four.usenet (John McCarthy) writes:
>I'm suspicious that "emergent" is just a fancy term for the fact
>that any system has some properties that are not properties of
>the components.  

Actually, this is close.  It would be more correct to say that an "emergent"
is a property of a system that cannot be extrapolated from a simple
understanding of the components of the system.  (The EXOR circuit you
constructed in your example could be predicted from knowing the properties
of the other gates in the system.)

My favorite example of an emergent property involves aqua regia, a combination
of hydrochloric and nitric acids.  Now, neither of the component acids will
significantly affect gold; however, the combination dissolves it easily.  This
property was not predictable from knowing the properties of either the acids
or of the gold;  it is an "emergent" result.

Brendt Hess
decomyn@penguin.uss.tek.com
Disclaimer:  Opinions?  I don't even *work* here!

kyriazis@iear.arts.rpi.edu (George Kyriazis) (09/29/90)

In article <18070001@hp-ses.SDE.HP.COM> david@hp-ses.SDE.HP.COM (David McFadzean) writes:
>As for the human brain being the most complex organized system known to
>exist, I would say that the system of all human brains that we call
>human society is more complex (though calling it organized might be
>stretching it. :) If this is true, could individuals be considered
>analogous to neurons with respect to nations? Can countries be
>considered sentient at some higher level?
>
I would agree that people are to nations what neurons are to the human
brain.  Now the following questions arise:

Humans are extremely inconsistent and unpredictable, as opposed
to neurons or whatever else forms other organized systems.  This
increases the randomness of the system, and the resulting global
behaviour is not stable enough to be characterized as organized.  Now,
here is the flip side:  Neurons definitely cannot comprehend human
behaviour, so a human (being part of a society) cannot comprehend
the behaviour of the society.  So, even if the organized behaviour of
the human society exists, I think we won't be able to realize its
existence!  Monitoring humanity for long periods of time will
be valuable for understanding the path of human society,
and maybe predicting its future, but I don't think it's going to
get anywhere.


----------------------------------------------------------------------
  George Kyriazis                 kyriazis@rdrc.rpi.edu
 				  kyriazis@iear.arts.rpi.edu

forbis@milton.u.washington.edu (Gary Forbis) (09/29/90)

It's a minor point.

In article <ZCN%#J*@rpi.edu> kyriazis@iear.arts.rpi.edu (George Kyriazis) writes:
>Now, here
>is the flip side:  Neurons definitely cannot comprehend human
>behaviour, so a human (being part of a society) cannot comprehend
>the behaviour of the society.

I'm not sure your analogy follows.  It is possible to write a program
which when run writes its source.  There are many compression schemes.
If comprehension of society's behavior can be sufficiently compressed,
one person might be able to hold it in its entirety.
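The program-that-prints-its-source is real; a standard two-line Python quine (not specific to this thread) makes the compression point concrete:

```python
# A self-reproducing program ("quine"): the two lines below regenerate
# their own text from a short template plus a substitution rule -- a
# complete description held in a compressed form.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```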

>  George Kyriazis                 kyriazis@rdrc.rpi.edu
> 				  kyriazis@iear.arts.rpi.edu

--gary forbis

kyriazis@iear.arts.rpi.edu (George Kyriazis) (09/30/90)

In article <8312@milton.u.washington.edu> forbis@milton.u.washington.edu (Gary Forbis) writes:
>I'm not sure your analogy follows.  It is possible to write a program
>which when run writes its source.  There are many compression schemes.
>If comprehension of society's behavior can be sufficiently compressed
>one person might be able to hold it in its entirety.
>
The behaviour of the society is directly related to the experiences
of each person.  Clearly, one person cannot handle all the experiences
of every member of the society.  Ok, granted, most of them are useless.
I think, though, that computation that emerges out of the society when
none of its members has any idea about the computation itself is purely
emergent (remember a previous article stating that the whole is greater
than the sum of the parts, and also that emergent computation exists
when we cannot explain it from the parts of the complex system).  If one
person can handle the intellect of the whole society, then emergent
computation does not exist; we know where it comes from: from that
society unit.


----------------------------------------------------------------------
  George Kyriazis                 kyriazis@rdrc.rpi.edu
 				  kyriazis@iear.arts.rpi.edu

cpshelley@violet.waterloo.edu (cameron shelley) (09/30/90)

  I have found the discussion of 'emergent' properties quite interesting
and would like to add some comments of my own.  My background is more in
computer science (NLU, say) and linguistics than in philosophy or 
physics and my discussion will no doubt reflect that.

  In semantics, people like Montague have created theories using the 
"principle of compositionality" (Frege's principle) which basically
asserts that (in the case of languages) the meaning of an utterance
can be arrived at by a finite series of manipulations on its component
parts.  These manipulations may be of arbitrary complexity, so long as
they are well formed.  This works well for analysing sentences or 
short discussions, but less well for language that relies heavily on
metaphor or other nebulous contexts: e.g. "No man is an island", which
is quite trivial on the surface.  Montague proposed to get around this
by supplying extra rules (presumably with higher precedence than the
compositional ones) which would 'intercept' such idioms and language
and express their value directly, without further analysis.  At least,
this is my understanding.  Thus the problem would be solved by fiat,
much as the Russell paradox was solved by von Neumann.  Whether metaphor
and the like can be considered examples of emergent properties of mind
or language is highly debatable, since our knowledge of linguistic
"atoms" is so insecure.
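The compositional-plus-idiom scheme described above might be sketched like this (a toy illustration only, not Montague's actual formalism; the lexicon and idiom table are invented):

```python
# Toy compositional semantics: the meaning of the whole is computed
# from the meanings of its parts, except where a higher-precedence
# idiom rule intercepts the phrase and assigns a value directly.
IDIOMS = {
    ("no", "man", "is", "an", "island"): "EVERYONE-DEPENDS-ON-OTHERS",
}
LEXICON = {
    "john": "JOHN",
    "sleeps": lambda subject: "SLEEP(%s)" % subject,
}

def meaning(words):
    key = tuple(w.lower() for w in words)
    if key in IDIOMS:          # idiom rules apply first, by fiat
        return IDIOMS[key]
    subject, verb = key        # toy grammar: Subject Verb
    return LEXICON[verb](LEXICON[subject])
```

Here `meaning(["John", "sleeps"])` is built up compositionally, while "No man is an island" is intercepted whole and given its value directly, without further analysis.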

  Btw, the definition of emergent property I'll try to stick to is
that of a property of a 'system' which is not provable from a complete
knowledge of the properties of its 'parts'.  The definition of 'system'
and 'part' I will leave alone for the time being.

  Hofstadter mentions in _Goedel, Escher, Bach_ (GEB) that in 'chunking'
information, i.e. in moving up a level of abstraction, some detail is
inevitably lost.  Someone (sorry, I don't have the attribution here!)
mentioned that the chemical properties of a mixture are not always
visible from the chemical properties of its components, such as the
solubility of gold.  It is, I think, possible to explain such effects
by moving down to a lower level of abstraction (or description), in
this case say quantum electrodynamics, but then again the description
of the 'emergent' property also becomes more complex when moved to
that same level - which seems only fair.  Does the explanation of the
emergent property involve new 'emergent' properties at the new, lower
level?  I don't know the answer in this case, but I will assume that
it's possible.  At this point, one can play "the chicken and the egg"
with levels of abstraction until one hits bottom - the axioms.  Are
the axioms of a mathematical system considered emergent properties of
- nothing?  Or in physical terms (correct me if I'm wrong), is the 
material universe an emergent property of the vacuum?  Damned if I 
know! :>  Mathematical axioms are just projections of our perceptions
of reality, or part of it, as are the 'laws' of physics.  But I'm
getting off topic...  The point is we ultimately run out of 'parts'.

  GEB also discusses the fact that what we might consider information is
not necessarily located in an easily definable place.  Examples given in
this group have been the nature of human memory in the brain, and the
number of users that can log into a system before it starts running
into difficulties.  Level of abstraction plays a part here too, i.e.
what exactly is "a memory", or even "a user", to the computer system?
A memory is not a useful concept when dealing with fine details of
brain chemistry, but it is for us who live at a much higher level and
do not perceive the world (intuitively) in terms of exchange of electrons
and energy particles.  But does that mean 'memory' is or is not an 
emergent property?  The question is answerable 'yes or no', I believe,
when it is qualified as "emergent relative to what description level?"
If it is asked with reference to some 'absolute' framework, then I don't
see how it could be answered one way or the other.  

  Which brings me back (finally :>) to the issue of 'system' and 'parts'.
These two terms both refer to how we look at things.  If a car is a 
system, then can the design team be considered parts?  They are as
responsible, in a way, for the properties of the car as are the metal
pieces that physically make up its engine.  The question sounds absurd
in English, since its speakers have a very noun-based view of the world
in which everything is a delimitable object or a process which affects
such objects (I am oversimplifying somewhat, but the trend is there).
Some languages (several North American Indian ones, for example) look
on objects as being a 'slice' of a temporal process spanning the whole
lifetime of the 'object' they see.  In such a view, the designers of a
car could well be considered 'part' of it.  Similarly, Searle's
(in)famous Chinese room problem rests heavily on the 'obvious'
absurdity of composing a room, book, and person into an intelligent
system.  What I'm saying, in short, is that a
candidate for an 'emergent' property may be the result of a 'part' which
we simply do not perceive as being present in a 'system', or that our
idea of 'system' is too narrow.  The effort
to eliminate this problem might ultimately require some reference to
a set of 'all parts' which is certainly not tractable and certainly
impossible in any axiomatic system I have heard of.

  My conclusion is that the idea of 'emergent' properties is inevitable
at any level of abstraction, but does not necessarily constitute
something unexplainable in principle.  On the other hand, the reduction
of all 'emergent' properties is a waste of time and, I believe, not
completely possible anyway.

  I am looking forward to some comments on this one!  Anything constructive
would be appreciated. :>


--
      Cameron Shelley        | "Armor, n.  The kind of clothing worn by a man
cpshelley@violet.waterloo.edu|  whose tailor is a blacksmith."
    Davis Centre Rm 2136     |
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (09/30/90)

The most naive idea of "emergence" is that of a behavior-phenomenon
which appears in observing a group of things, and that cannot be predicted
from knowing all about each individual.  This is uninteresting.  A
less trivial idea of emergence is a behavior that can't be explained
in terms of the parts and their interrelationships.  But that's too
general, because of an ambiguity in what we mean by "their
relationships."

By the way, note that "emergence" itself is not a dyadic relationship
between a system and its behavior.  It is a triadic relationship
between the system's structure, its behavior, and the observer's
usually incomplete understanding of the relation between the first two!

What is fascinating is the extent to which, in science, it has so
often sufficed merely to know the dyadic relationships of the objects,
just two at a time.  This is the case in Newtonian mechanics; all the
forces are simply dyadic, and one has only to sum them to find the
accelerations that determine all the trajectories.  (Perhaps there is
something slightly triadic about this, however, because the
inter-forces alone don't suffice; they must all be referred to the
same coordinates, so as to obtain the proper vector sums.)
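Minsky's point that Newtonian mechanics is dyadic shows up in how an N-body force calculation is written: every term involves exactly two bodies, expressed in one shared coordinate frame, and the net force is just the vector sum.  A minimal sketch (names and 2-D setup are my own, not from the post):

```python
import itertools

G = 6.674e-11  # gravitational constant

def net_forces(bodies):
    """bodies: list of (mass, (x, y)).  Returns the net force on each
    body as the plain vector sum of purely pairwise (dyadic) forces,
    all referred to the same coordinate frame."""
    forces = [[0.0, 0.0] for _ in bodies]
    for i, j in itertools.combinations(range(len(bodies)), 2):
        (m1, (x1, y1)), (m2, (x2, y2)) = bodies[i], bodies[j]
        dx, dy = x2 - x1, y2 - y1
        r2 = dx * dx + dy * dy
        f = G * m1 * m2 / r2          # magnitude of this pair's force
        r = r2 ** 0.5
        fx, fy = f * dx / r, f * dy / r
        forces[i][0] += fx; forces[i][1] += fy   # Newton's third law:
        forces[j][0] -= fx; forces[j][1] -= fy   # equal and opposite
    return forces
```

Nothing in the loop ever looks at three bodies at once; all the structure of the trajectories comes from summing two-at-a-time interactions.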

In general relativity it is not so simple, because of each particle
affecting the geometry of space, and thus changing the distance
vectors of the other pairs.  Still, the participation of each
particle in the overall differential equation is enough to explain all
the trajectories -- assuming you have the right equation.

In quantum electrodynamics as well, I have the impression that, again,
there are no mysterious emergents, in the sense that the two-at-a-time
exchange-interactions account for everything. However, each exchange
implies a new particle, and you have to include the two-at-a-time
interactions of all of these, hence the annoying infinite series.
Also, now things are a little different for many-particle problems,
because the equations can no longer be solved within a manifold of
fixed dimension: they live not in a low-order vector space but in one
with at least the dimensionality of the configuration space.  Despite
all that complexity, however, one still feels that the predictions
come directly, albeit in a complicated manner, from one's
understanding of the elementary particles and their local
interactions.  No mysterious emergents.

Returning to the Newtonian situation, we could easily enough conceive
of a universe in which certain triadic interactions had unique and
"irreducible" effects, so that one could not make predictions on the
basis only of low order interactions.  Imagine a universe that were
Newtonian, with all forces depending on linear sums of
pair-relationships; that would determine all the orbits of planets,
stars, and galaxies.  But suppose also that some capricious God
imposed one extra, arbitrary law: whenever three stars form an
equilateral triangle, then they simply disappear.  That would appear,
to a classical physicist, to be an "inexplicable emergent" -- until it
was added as a new law of nature.
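The capricious three-star law is irreducibly a rule about triples: a simulator that only summed pair forces would never trigger it, and would need an extra check on every triple, something like this (a sketch; the function name and tolerance are invented):

```python
import itertools
import math

def doomed_triples(stars, tol=1e-9):
    """Return every triple of star positions forming an (approximately)
    equilateral triangle -- the trigger of the imagined triadic law.
    The predicate is evaluated on three bodies at once; it cannot be
    written as a sum of independent pairwise contributions."""
    doomed = []
    for a, b, c in itertools.combinations(stars, 3):
        ab, bc, ca = math.dist(a, b), math.dist(b, c), math.dist(c, a)
        if abs(ab - bc) < tol and abs(bc - ca) < tol:
            doomed.append((a, b, c))
    return doomed
```

Note the cost: checking all triples grows as the cube of the number of stars, where pairwise forces grow only as the square, which is one way of seeing why such a law would feel like an extra, special assumption.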

The amazing thing is how rarely anything resembling an "inexplicable
emergent" has ever reigned for very long in the history of Science --
except for transient periods before better theories remove the need
for the assumption of extra, special laws.  The moral is that,
whenever you're pretty sure you are dealing with a "genuine emergent",
you're probably making a mistake that reflects the primitive scientific
culture of your time.  The longest holdout was "life", or the vital
spirit, whose reduction commenced with Pasteur (and Darwin) and was
pretty much buried with Watson-Crick.  

A present-day holdout is "consciousness", and this is well illustrated
by Penrose's dogmatic naiveties.  It is no accident, I suppose, that he
does not cite the suggestions I made about consciousness in "The
Society of Mind", in which I suggest that most of the phenomena
involved are related to (limited amounts of) short term memory.  If
so, future AI machines will be much more conscious than humans are,
and may also have much less sense of mystery about it.

frank@bruce.cs.monash.OZ.AU (Frank Breen) (09/30/90)

In <ZCN%#J*@rpi.edu> kyriazis@iear.arts.rpi.edu (George Kyriazis) writes:
>...  Neurons definitely cannot comprehend human
>behaviour, so a human (being part of a society) cannot comprehend
>the behaviour of the society.  So, even if the organized behaviour of 
>the human society exists, I think we won't be able to realize its
>existance!

I thought that, for example, some aspects of crowd behavior were fairly
predictable even though individual behaviours within a crowd were
not.  Surely this kind of thing would also apply to societies.

Also I find that I can comprehend and to a reasonable extent predict
the behaviour of more than one(!) of my friends even though together
they are more complex than I.  Of course I can't comprehend everything
about them but I have a lot of useful information.  This also would
apply to human society - You don't need to know (and I can't imagine
how you could, or would even want to) EVERYTHING about a society.

Another analogy is that no-one (as far as I know) understands all
established science and I don't think it is possible to do so, but
science still manages ok.

Frank Breen

kyriazis@iear.arts.rpi.edu (George Kyriazis) (10/01/90)

In article <3145@bruce.cs.monash.OZ.AU> frank@bruce.cs.monash.OZ.AU (Frank Breen) writes:
>I thought that for e.g. some aspects of crowd behavior were fairly
>predictable even though individual behaviours within a crowd were
>not.  Surely this kind of thing would also apply to societies.
>
>...
>
>Another analogy is that no-one (as far as I know) understands all
>established science and I don't think it is possible to do so, but
>science still manages ok.
>
Alright.  I probably should've phrased it differently.  I totally
agree that some people in the society can comprehend some aspects
of group behaviour.  What I am arguing is that there might be a next
level of consciousness of our society that is not comprehensible by
humans, just as human thought is incomprehensible to neurons.


----------------------------------------------------------------------
  George Kyriazis                 kyriazis@rdrc.rpi.edu
 				  kyriazis@iear.arts.rpi.edu

smeagol@eng.umd.edu (Kenneth A. Hennacy) (10/01/90)

In article <8581@helios.TAMU.EDU> n025fc@tamuts.tamu.edu (Kevin Weller) writes:
>
>Paul Davies puts it best when he asks if a Beethoven symphony is nothing
>but a collection of notes or if a Dickens novel is nothing more than a
>collection of words (*).....
>
>The human brain is one of the most (if not *the* most) complex organized
>system presently known to exist. 

The human brain could be likened to the book,
i.e. it is nothing more than a collection of such
and such.  To say that it's the most complex system means
something only when there is something that attaches a
meaning to it.  Just as with the book, it doesn't have
any meaning or complexity until one associates a
meaning or complexity with it.

So, we are the measure of our own complexity, i.e. some
other creature may not attach any significance at all to
what we do, "think", etc.  This creature does not have to
be a rabbit, it could be one that attaches complexity to
things which we have yet to be aware of, or appreciate.  

So, is it us or our society that creates this complexity?  I
mean, I could scribble a whole bunch of stuff like 

#$^%@$*!@#$()@#%(&#$^T@#($#()##@$!@#$(!#!@#

which to me could mean a lot, but to you means nothing;
therefore, it would be devoid of complexity.

Ken Hennacy
 

danforth@riacs.edu (Douglas G. Danforth) (10/02/90)

In <1990Sep29.213139.2876@watdragon.waterloo.edu> cpshelley@violet.waterloo.edu (cameron shelley) writes:


>  I have found the discussion of 'emergent' properties quite interesting
>and would like to add some comments of my own.  My background is more in
>computer science (NLU, say) and linguistics than in philosophy or 
>physics and my discussion will no doubt reflect that.
     
     I would like to thank Cameron Shelley for following this line of attack
for it gives me an opportunity to "kill two birds with one stone" 
(actually that's a little too violent, so shall I say address two topics
at once?).
     The first issue is semantics and the second is emergence.  I find it
helpful to consider both of these in terms of "state".  They are both
states of the observer: the hearer of the sentence and the viewer of the
system.  Is there a single meaning to a sentence or is there just an 
elicited state in the hearer (or reader) of the sentence?  Is there
an emergent property to a system or is there simply a change of state
that the observer "decrees" to be unusual, unpredictable, and therefore
emergent from the system?
     
The universe evolves whether we understand it or not (were Newton's laws,
laws before Newton? Are they now?  Are Einstein's?).

The raising of a robot arm may or may not be emergent behavior.  To one
person it may be predictable.  To another it may not.  The mixing of 1 part
nitric acid with 3 parts hydrochloric acid creates something that dissolves
gold.  WE call that significant.  But what about the fact that gold in water
does not dissolve?  Why isn't that significant?  It's not sexy enough.  It's
predictable (but only because many people have experienced this fact).  Was
it predictable before it was experienced?  Really?
     To sharpen up the dialog we need to introduce comparisons and standards.
Such as:  X in state S and in situation Y will be deemed to exhibit emergent
behavior if any of its actions belong to set A.  Also, the meaning of a
sentence within 1 second of its utterance is the average response of native
speakers of the language given a set of choices (forced choice), OR the set of
responses (verbal, written) (free response) from that group.  To ask for THE
meaning of a sentence, in my opinion, is without content (even with the
vast literature on semantics and Montague grammars). 
     I'm sure many of you have considered the meaning of a sentence and had 
it change on you in the course of its examination. Your state is changing. Is
the meaning changing? For you yes. For others? A nonsense question until they
attempt the same task.  Will the meaning of the sentence for you be the same
next year as it is now?  Probably not.
     The state of a mixture of atoms is a non-linear function of its 
configuration.  This state can change in "emergent" ways since the 
non-linearity is not just the sum of its parts.  If one person is familiar
with the behavior of a specific non-linear system and a second is not, does
the behavior of such a system exhibit emergent behavior?  To the first?  To
the second?
     Everything is in the eye of the beholder.  It's just a good thing that
we are all (more or less) cast from the same mold.
--
() Douglas G. Danforth                   EMail: danforth@riacs.edu
() RIACS M/S 230-5
() NASA Ames Research Center
() Moffett Field, CA 94035

jfbuss@maytag.waterloo.edu (Jonathan Buss) (10/02/90)

In article <8581@helios.TAMU.EDU> n025fc@tamuts.tamu.edu (Kevin Weller) writes:

>There are some highly ordered systems in nature with properties which can
>have no explanation that is solely dependent on the properties of the
>individual components of the system. ...
>
>Paul Davies puts it best when he asks if a Beethoven symphony is nothing
>but a collection of notes or if a Dickens novel is nothing more than a
>collection of words (*).  On one level of description, the novel is a
>collection of words, but is this all we need to know in order to fully
>appreciate it?  There is so much more depth to be found if we only step
>back and take in the bigger picture!  This is the origin of the phrase "the
>whole is greater than the sum of its parts."

A Dickens novel is a collection of words.  A Dickens novel being read
by someone, in a social context, with the possibility of discussing it
later, is something else.  Why should a whole be explainable as a sum
of some of its parts?  No one ever tries to explain computers in terms
of only AND gates.

Jonathan Buss

n025fc@tamuts.tamu.edu (Kevin Weller) (10/03/90)

In article <3499@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>....................  But suppose also that some capricious God
>imposed one extra, arbitrary law: whenever three stars form an
>equilateral triangle, then they simply disappear.  That would appear,
>to a classical physicist, to be an "inexplicable emergent" -- until it
>was added as a new law of nature.

True, but I might question how declaring a new 'law' "explains"
anything.  It does from the practical standpoint of relative levels of
abstraction, but it might not in any "absolute" sense.

My definition of emergence has undergone a slightly pragmatic change.
See my reply to Jonathan Buss for more on this.

Regards -- Kev

n025fc@tamuts.tamu.edu (Kevin Weller) (10/03/90)

In article <1990Oct1.002909.21899@eng.umd.edu> smeagol@eng.umd.edu (Kenneth A. Hennacy) writes:
>In article <8581@helios.TAMU.EDU> n025fc@tamuts.tamu.edu (Kevin Weller) writes:
>The human brain could be likened to the book, 
>i.e. it is nothing more than a collection of such 
>and such.  To say that its the most complex system means 
>something only when there is something that attaches a
>meaning to it. Just the same as the book, it doesn't have
>any meaning or complexity to it until one associates a 
>meaning or complexity with it. 
>
>So, we are the measure of our own complexity, i.e. some
>other creature may not attach any significance at all to
>what we do, "think", etc.  This creature does not have to
>be a rabbit, it could be one that attaches complexity to
>things which we have yet to be aware of, or appreciate.  
>
>So, is it us or our society that creates this complexity?  I
>mean, I could scribble a whole bunch of stuff like 
>
>#$^%@$*!@#$()@#%(&#$^T@#($#()##@$!@#$(!#!@#
>
>which to me, could mean alot, but to you means nothing,        
>therefore, it would be devoid of complexity.
>
>Ken Hennacy

I don't quite agree with your definition of complexity.

Firstly, different levels of complexity are relative to EACH OTHER.
If something (brain, language, whatever) lacks enough components
("bits" in computer terminology) to represent much information
relative to a predetermined "standard" (decided on relative to the
question asked), then it is simply not complex enough to hold the
information we are looking for in it.  On the other hand, complex
objects have the potential to represent many different messages in
many different contexts depending on who is "reading" them.
Complexity is independent of whatever it may represent.  If a
relatively simple object "represents" too much for itself, then the
excess information must be contained within the perceiver's more
complex "brain."  In other words, you can't randomly "attach" greater
complexity to objects relative to other objects.
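One way to make "complexity independent of whatever it may represent" concrete is a crude compression test, in the spirit of Kolmogorov complexity (my own sketch; zlib's compressed length is only a rough proxy for information content):

```python
import zlib

def complexity(data: bytes) -> int:
    # Crude, observer-independent proxy for information content:
    # the length of the shortest description we can find for the
    # data -- here, just zlib at its maximum effort level.
    return len(zlib.compress(data, 9))
```

On this measure a highly repetitive message compresses to almost nothing, while a structureless scribble stays large, no matter what meaning any particular reader attaches to either.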

Secondly, you are confusing complexity with specific organization,
both of which should be present in some degree for a given level of
intelligence.  Of course, our scales need not be absolute to be
useful.

I have modified my original definition of emergence somewhat.  See my
response to Jonathan Buss for a better discussion of that.

-- Kev

n025fc@tamuts.tamu.edu (Kevin Weller) (10/03/90)

In article <1990Oct1.212639.24730@maytag.waterloo.edu> jfbuss@maytag.waterloo.edu (Jonathan Buss) writes:
>A Dickens novel is a collection of words.  A Dickens novel being read
>by someone, in a social context, with the possibility of discussing it
>later, is something else.  Why should a whole be explainable as a sum
>of some of its parts?  No one ever tries to explain computers in terms
>of only AND gates.
>
>Jonathan Buss

An excellent point, which really brings home the shortcomings of
English when trying to define "system" at relative levels of
abstraction.  In fact, you have forced me to reconsider my original
definition of emergence [ I bet you all thought I was perfect :-) ].

Perhaps we should be thinking of emergence as a practical approach to
solving certain problems rather than a metaphysical truth.  This
pragmatic definition is, I think, what the originators of the term
meant for it.  The whole purpose of the concept was to try to
demystify the collective behavior of highly organized systems in
pursuit of a non-dualistic explanation for mind.  It was NOT intended
to offer some kind of pseudo-scientific support for philosophical
dualism at all!  It can be used as a razzle-dazzle word, but it is
better used as a pragmatic term for collective behavior.

Your solution for my problem is a perfect example of the practical
value of the emergence idea.  You had to resort to a larger system
(higher on our scale of abstraction) of book, reader, and social
context to explain the book's power to generate interest and
discussion.  Some books simply don't have nearly as much of this power
(relative to some readers and contexts, but this point doesn't really
matter for our purpose, as we will see shortly), and some don't have
what they do for long while others seem to have it perennially.  This
power is by no means absolute and universal, but in context it is
manifested in some works more than in others.  It derives not just
from the component words, but also from the way they are assembled
into a consistent whole having a meaning to human readers.  The way I
define it, the sum of the parts is simply the parts unassembled, or
possibly assembled in a simple linear system.  The extra (more than
the sum) comes into the picture with the more complex *structure*, of
words, paragraphs, chapters and people for our current example.  You
may argue that the structure is ultimately the result of organizing
properties in the fundamental components, which I was unwilling to say
before (for some systems) but now concede as very likely (for *almost*
all systems).  Here is my change of mind.

As far as computers go, you may explain their operation fully in terms
of AND gates, OR gates, electromotive force, and so on.  This
explanation can be complete for its own level of abstraction, but it
hardly solves all the problems of computer science!  Programmers
rarely, if ever, try to describe high-level programs in terms of
electropotentials in the computer hardware or magnetic aberrations on
a hard disk either, if only because it is impractical to do so.  They
will usually talk about subroutines, loops, processes, etc.  The point
here is that a program transcends the means of its expression.  The
same program can exist as a magnetic pattern on a disk, a pattern of
electrical pulses in a computer memory, deposits of ink or graphite on
some paper, and patterns of electrochemical impulses in a human brain.
The (high level) program can be fully explained in a way completely
independent of the hardware it is run on by referring to its
functions, or sequence of symbolic operations.  The explanation is
complete for its own level of abstraction.  It can be explained more
reductionistically, too, but the more abstract way is usually better
(i.e., more useful) in this case.  It all depends on what you want to
describe.  My work routinely requires me to shift focus between lower
and higher levels because of the hierarchical relationships of these
models.  As I understand it, description-levels of the brain are even
more tightly bound than we originally thought.  The hierarchy is just
so vast that it is still useful to consider psychological events as
"real" in the same sense that programs are "real" (except on a MUCH
higher level of abstraction).  I doubt that we will feel the need to
dispose of such abstract ideas even if (or when) we fully "explain"
how neurons interact to produce sentience and intelligence, just as we
should not feel compelled to stop studying biology if (or when) a
Grand Unified Theory is put forth by physicists to "explain"
everything.
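The gate-level point can be made concrete: here is a half-adder built from nothing but AND, OR, and NOT (a toy sketch of my own, not from the post), though nobody would describe a high-level program this way:

```python
# Everything below is expressed only in terms of AND, OR, NOT --
# a complete explanation at the gate level of abstraction.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def xor(a, b):
    # One level up: XOR as a named abstraction over raw gates.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    # Another level up: arithmetic (sum bit, carry bit) from logic.
    return xor(a, b), AND(a, b)
```

Each level of description here is complete on its own terms, yet a programmer reaching for `half_adder` rarely has any reason to drop back down to the gates, which is exactly the point about choosing the level of abstraction that is useful.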

I wrote my (possibly) mistaken definition of emergence with good
reason, however.  Modern physics seems to make no sense unless we
consider every particle in the universe as multiple manifestations of
a single particle.  In other words, it suggests that there is really
only one particle in the whole universe!  Under this model, all
particles are in some way "contained" in all other particles, so that
reference to the fundamental parts (particles) of *everything* is
meaningless without reference to everything else.

The theory here is that total (absolute, not relative) reduction is
FUNDAMENTALLY impossible.  All explanations are made in relation to
their description-levels, including the so-called "explanations" of
reductionistic science.  So perhaps my original definition is correct
in one sense, but as long as we're stuck inside our framework of
hierarchies, incapable of explaining anything in an "absolute"
all-around way, my newer one might be more pragmatic.

The question now must be: are the current "new physics" models
correct?  Right now the facts DO point that way, but admittedly, this
could change (or we wouldn't call it science).  We have the
scientist's faith in the ultimate reducibility of everything based on
the history of science to date.  It's not an unfounded faith, yet it
is contingent on the direction the facts will take us.

-- Kev

jwtlai@watcgl.waterloo.edu (Jim W Lai) (10/03/90)

In article <8745@helios.TAMU.EDU> n025fc@tamuts.tamu.edu (Kevin Weller) writes:
>In article <3499@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu
(Marvin Minsky) writes:
>>[...]  But suppose also that some capricious God
>>imposed one extra, arbitrary law: whenever three stars form an
>>equilateral triangle, then they simply disappear.  That would appear,
>>to a classical physicist, to be an "inexplicable emergent" -- until
>>it was added as a new law of nature.
>
>True, but I might question how declaring a new 'law' "explains"
>anything.  It does from the practical standpoint of relative levels
>of abstraction, but it might not in any "absolute" sense.

It simply does not guarantee absolute truth.  Our means of inquiry do not
provide a means of determining absolute truth in the physical sciences.

danforth@riacs.edu (Douglas G. Danforth) (10/04/90)

In <3499@media-lab.MEDIA.MIT.EDU> minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) writes:

>The amazing thing is how rarely anything resembling an "inexplicable
>emergent" has ever reigned for very long in the history of Science --
>except for transient periods before better theories remove the need
>for the assumption of extra, special laws.  The moral is that,
>whenever you're pretty sure you are dealing with a "genuine emergent",
>you're proably making a mistake that reflects the primitive scientific
>culture of your time.  

> ....                                               That would appear,
>to a classical physicist, to be an "inexplicable emergent" -- until it
>was added as a new law of nature.


(Point 1)
     It seems that man can not tolerate "inexplicable emergents".  Either
it is explained by existing laws or soon thereafter by new theories.  If
not then it is simply added as a new law of nature.
     Every once in a while we remember that the blue of the sky is
truly amazing and not just a consequence of Rayleigh scattering, or that
the tug of gravity is just as mysterious whether or not affine connections
play a role.
     My sense-of-self will still be a sense of my-self even when we have
the full "explanation" of it (if I live that long).

(Point 2)
     I sometimes wonder how much our theories are not just recastings of
our experience (without great insight). It has happened several times that
physicists have found in the mathematics literature exactly the math they
need to solve their physics problems. Whence came the mathematics? Was it not
from abstractions of earlier physics problems (an historian of science should
be able to prove or disprove this conjecture)?
--
Douglas G. Danforth   		    (danforth@riacs.edu)
Research Institute for Advanced Computer Science (RIACS)
M/S 230-5, NASA Ames Research Center
Moffett Field, CA 94035

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/04/90)

In 1990Oct3.183522.17076@riacs.edu, Douglas G. Danforth writes

 > Every once in a while we re-remember that the blue of the sky is
truly amazing and not just a consequence of Rayleigh scattering, or that
the tug of gravity is just as mysterious whether or not affine connections
play a role.

I don't agree with that.  I don't find the blue sky truly amazing.  I
do find that it sometimes activates some primitive emotions that I
don't understand -- and have trained myself to regard this as more
annoying than amazing.  This leads me to do more experiments and try
to refine existing theories.

 > My sense-of-self will still be a sense of my-self even when we have
the full "explanation" of it (if I live that long).

That's an interesting prediction.  I bet that it would be wrong, under
the condition you mention.  My own sense of self has changed a lot
after thinking about the "Society of Mind" theory for a long time.
I'm not dogmatically asserting that this particular theory is the
"full explanation", by the way. Many people have reported changes in
their sense of self after developing new theoretical views -- for
example, during psychoanalysis.  How could that be?  Because, I
suspect, your "sense of self" is not a true sense, or "in-sight".  It
is mainly an illusion, partly cultural but not arbitrary, in which the
infant builds up certain kinds of theories (basically wrong ones, by
the way) about what kind of a being it is.

Incidentally, I have the impression that part of the alleged effect of
EST therapy is training oneself to regard other people as "mere"
mechanisms.  Your ego won't be hurt so much if you think of your
opponent as a worthless, unattractive, and uninteresting collection of
machinery!  (And on the other side, I find that some people feel
assaulted by the aforementioned "Society of Mind" theory -- because
they think I'm saying that they are "mere".  In my view, a superbly
organized trillion-part machine can hardly be considered mere.  But a
body with a single, structureless, causeless "soul" would indeed be
mere -- and I would consider it an insult to be considered to be so
formless as that.)

 < (Point 2) I sometimes wonder how much our theories are not just
recastings of our experience (without great insight). It has happened
several times that physicists have found in the mathematics literature
exactly the math they need to solve their physics problems. Whence
came the mathematics? Was it not from abstractions of earlier physics
problems (an historian of science should be able to prove or disprove
this conjecture)?

Indeed, I have heard science historians argue that much of mathematics
came from earlier physics theories.  But there is another possibility
explained in an essay of mine --- "Communication with Alien
Intelligence," in @i[Extraterrestrials: Science and Alien
Intelligence,] (E. Regis, ed.) Cambridge University Press, 1985.  This
is a cute theory based on some experiments with very small Turing
machines.  It turned out that many of them performed operations that
could be interpreted as elementary addition -- while none of them did
anything that was "similar" to addition but not exactly addition!   
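
Minsky's observation can be made concrete with a sketch (my own illustration, with invented state names -- not one of the machines from his experiments): a five-rule Turing machine that adds two unary numbers by fusing the two blocks of marks and erasing one surplus mark.

```python
# A tiny Turing machine that adds two unary numbers: "111011" (3+2) -> "11111".
# Rules map (state, symbol) -> (new state, symbol to write, head move).
RULES = {
    ("find_sep", "1"): ("find_sep", "1", +1),  # skip over the first block
    ("find_sep", "0"): ("find_end", "1", +1),  # turn the separator into a 1
    ("find_end", "1"): ("find_end", "1", +1),  # skip over the second block
    ("find_end", "_"): ("erase",    "_", -1),  # hit the blank; back up one cell
    ("erase",    "1"): ("halt",     "_",  0),  # erase the one surplus 1
}

def run(tape_str, max_steps=1000):
    tape = dict(enumerate(tape_str))           # sparse tape, "_" = blank
    state, head = "find_sep", 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = RULES[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")
```

Running `run("111011")` yields `"11111"`: the machine "does addition" even though nothing in its rule table mentions arithmetic, which is the flavor of result the essay describes.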

What that seems to mean is that the most elementary mathematics -- or,
rather, the kinds that humans have historically first imagined -- hold
a peculiar position among "all possible mathematical systems".  In a
sense, they might simply be the ones that are "easiest for a machine
to think of".  Why, then, might they help in making physics theories?
Either because the universe, too, is peculiarly simple -- whatever
that means -- or that the simplest theories are (at least, at first)
the most useful ones -- simply because they are the first ones we can
use to make any predictions at all...

nsj@Apple.COM (Neal Johnson) (10/04/90)

In article <3549@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>
>
>In 1990Oct3.183522.17076@riacs.edu, Douglas G. Danforth writes
>
> > Every once in a while we re-remember that the blue of the sky is
>truly amazing and not just a consequence of Rayleigh scattering, or that
>the tug of gravity is just as mysterious whether or not affine connections
>play a role.
>
>I don't agree with that.  I don't find the blue sky truly amazing.  I
>do find that it sometimes activates some primitive emotions that I
>don't understand -- and have trained myself to regard this as more
>annoying than amazing.  This leads me to do more experiments and try
>to refine existing theories.
>

_Primitive emotions_? Why "primitive"? Why "annoying"? Why do I find
this response leading to a point of view that is ultimately de-humanizing
since awe, mystery, and the aesthetic experience are human? Why do I feel
that you must live in a pretty barren world full of theories and
intellectualizations but no beauty? What is wrong with being awed by a
blue sky? Why must you reduce it to a better theory? Is it because
beauty can't be quantified, that mystery can't be explained? Are we
just supposed to ignore these things "untouchable" by the scientific
method? What is to be gained by this reductionism? 

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/04/90)

In an earlier note, Douglas G. Danforth asserted that

>>> Every once in a while we re-remember that the blue of the sky is
truly amazing and not just a consequence of Rayleigh scattering

and I argued back that 

>> I don't find the blue sky truly amazing.  I do find that it
sometimes activates some primitive emotions that I don't understand --
and have trained myself to regard this as more annoying than amazing.
This leads me to do more experiments and try to refine existing
theories.

Then nsj@Apple.COM (Neal Johnson) retorted that

> _Primitive emotions_? Why "primitive"? Why "annoying"? Why do I find
this response leading to a point of view that is ultimately
de-humanizing since awe, mystery, and the aesthetic experience are
human? Why do I feel that you must live in a pretty barren world full
of theories and intellectualizations but no beauty? What is wrong with
being awed by a blue sky? Why must you reduce it to a better theory?
Is it because beauty can't be quantified, that mystery can't be
explained? Are we just supposed to ignore these things "untouchable"
by the scientific method? What is to be gained by this reductionism?

It seems to me that nsj identifies awe, mystery, and the aesthetic
experience as "human".  Well, I beg to differ.  Those, in my view are
the barren world of infantile thought.  Yes, I don't like "beauty"
because I have certain suspicions about what's happening when "I like
something without knowing why".  My view is that the brain has many
parts and many processes.  And the very things Neal likes to wallow
in, I suspect, are mainly the situations in which certain brain-parts
are stimulated by poorly-known "innate releasing mechanisms" of the
kind described by Lorenz and Tinbergen.  Because of the -- I said
"primitive" to mean evolutionarily early -- way those brain-parts were
connected in our early ancestry, those beauty-and-mystery-and-awe
activities "take over" and inhibit the more recent developments that
evolved in our journey from monkey to sapiens.  The joke, to me, is
that THINKING is the glorious part of being human, and I have not the
slightest reason to suppose that Neal's emotions when transfixed by a
stupid blue sky or (I dare to say) stupid Rembrandt Portrait that
people think show the character of a mind from the lines in a face --
that those emotions are any more subtle or elevating than those of a
mouse under whichever conditions evoke similar cognitive arrests.  

The cream of the joke is in the suggestion that people like me live
"in a pretty barren world full of theories and intellectualizations
but no beauty."  The "but" is misplaced for two reasons.  First, it is
the world of beauty that is, in my view, barren -- because it is based
on little parts of the brain paralyzing the big parts.  It is no
accident, I say, that people can say so little about why beauty is so
great and powerful.  It is, I claim, because there's almost nothing
much to say.  You can't keep your eye off that girl because of certain
curves.  You have no choice, because your little curve-detector turns
off your huge choice-engines.  You can't bear the absurdity and
shallowness of this, and so write thousands of years of stupid poems
praising flowers (which you appreciate probably LESS intensely than a
honeybee) and likening women (with real brains) to them.  Beauty, fah.

Sorry to flame so long at such trivial matters.  Back to
philosophy/science/psychology?  Thanks, guys.  Why don't the students
argue more?

kirlik@chmsr.gatech.edu (Alex Kirlik) (10/04/90)

In article <3560@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>the world of beauty that is, in my view, barren -- because it is based
>on little parts of the brain paralyzing the big parts.  It is no
>accident, I say, that people can say so little about why beauty is so
>great and powerful.  It is, I claim, because there's almost nothing
>much to say.  You can't keep your eye off that girl because of certain
>curves.  You have no choice, because your little curve-detector turns
>off your huge choice-engines.  You can't bear the absurdity and
>shallowness of this, and so write thousands of years of stupid poems
>praising flowers (which you appreciate probably LESS intensely than a
>honeybee) and likening women (with real brains) to them.  Beauty, fah.
>
>Why don't the students argue more?

Ok, as a perpetual student I'm goaded.  You say the glorious aspect of
humanity is thinking, and since you go right to a discussion of choice
engines, I presume you don't mean thinking for thinking's sake (whatever
that might mean) but thinking as it serves choice, ultimately as it
serves behavior and its attendant consequences.  Evolutionarily,
thinking could have evolved only if it contributed to successful
behavior in some way, and this is the "reason" we have "thinking."
Now to continue along these lines, thinking serves successful behavior
and is therefore glorious to the extent of its contribution here.

Now here's where I have problems.  What measure are we going to use
to measure success, that is, who has got the inside track on what I
should value, what the "utilities" in my choice engine should be?
Will an understanding of the mind/brain assist in this task?  I think
not.  I can choose to live in an ugly barren place or I can choose to
live in what I take to be a beautiful place.  Now you come along and
tell me don't choose the beautiful place, you only think it's beautiful
because those ancient curve detectors are firing up a storm, and hey
you're a smart guy, you want to rise above that kind of thinking don't
you?  For the life of me I don't see why I should.

So I like pea soup, I like a flower, I like to copulate. Who are you
to tell me what I really *should* like.


Alex 

UUCP:	kirlik@chmsr.UUCP
        {backbones}!gatech!chmsr!kirlik
INTERNET:	kirlik@chmsr.gatech.edu

smeagol@eng.umd.edu (Kenneth A. Hennacy) (10/04/90)

In article <8746@helios.TAMU.EDU> n025fc@tamuts.tamu.edu (Kevin Weller) writes:
>
>If something (brain, language, whatever) lacks enough components
>("bits" in computer terminology) to represent...information
>relative to a predetermined "standard"...then it is simply not complex
>enough to hold the information we are looking for in it.

Your careful mention of "standard" representation of information is crucial
to a discussion on emergent properties I think.  Somehow, a reference or 
"standard" within the brain must have been set up.  I carefully use "set up"
rather than "given".  The concept of an emergent AI seems to require both the
concept of self-generated organization and externally-stimulated organization.

As far as the # of bits necessary for representation, this is only required for
consistency of interpretation, not complexity.  Minsky was referring to
looking at the curves of women, so I'll use this example.  The ideas, and 
amount of information contained in this process could be exceedingly complex,
(bizarre even!) yet I could easily be limited in conveying these ideas to
you, but don't blame me, blame the fact that 

1) our brains are finite, so we use a finite number of symbols
2) due to the finite # of symbols, we calculate the meaning of sentences
3) because we calculate with little info, many assumptions are involved.

Also, some thoughts are not sequential, and so using a sequential channel
of communication can introduce misleading notions.  As an example, I 
refer you to the notion that the universe is actually 1 particle.  This 
requires you to abandon the notion of simultaneous occurrences.  However,
many of our theories, (relativity), perturbation calculations in 
quantum field theories, etc. require such notions.  

Ken Hennacy

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/04/90)

kirlik@chmsr.UUCP (Alex Kirlik) says,

>  I presume you don't mean thinking for thinking's sake (whatever
> that might mean) but thinking as it serves choice, ultimately as it
> serves behavior and its attendant consequences.  Evolutionarily,
> thinking could have evolved only if it contributed to successful
> behavior in some way, and this is the "reason" we have "thinking."
> Now to continue along these lines, thinking serves successful behavior
> and is therefore glorious to the extent of its contribution here.

Well, I was more concerned, not with choice and success in general,
but with finding things out.  Like, two hundred years ago, no one
would have understood what Alex said, about why thinking might have
evolved.  

But then Alex pursues a different track:

> Now here's where I have problems.  What measure are we going to use
> to measure success, that is, who has got the inside track on what I
> should value, what the "utilities" in my choice engine should be?
> Will an understanding of the mind/brain assist in this task?  I think
> not.  I can choose to live in an ugly barren place or I can choose to
> live in what I take to be a beautiful place.  Now you come along and
> tell me don't choose the beautiful place, you only think it's
> beautiful because those ancient curve detectors are firing up a storm,
> and hey you're a smart guy, you want to rise above that kind of
> thinking don't you?  For the life of me I don't see why I should.  So I
> like pea soup, I like a flower, I like to copulate. Who are you to
> tell me what I really *should* like.

I actually had a point to make that illuminates this problem, though
it doesn't solve it.  I wasn't telling you what to do.  I was saying
something quite different: that maybe you might say to yourself, "Am I
really liking this?  What am "I", indeed?  When one part of my brain
"likes" something very much, is it possible that there are other parts
of my brain -- maybe much <larger, better, more evolved -- whatever
you think> -- that are being suppressed, put out of it, deprived of life
and liberty, etc.?  Ask yourself (as some stoic philosophers did, I
suspect) -- "Who are those little proto-mammalian pleasure centers in
my brain to tell me what I really *should* like?"

So I wasn't saying what to do, only suggesting that you look more
thoughtfully at what may already be telling "you" what to do.  Don't
let your mind kick you around.

rjf@canon.co.uk (Robin Faichney) (10/04/90)

In article <3499@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>The amazing thing is how rarely anything resembling an "inexplicable
>emergent" has ever reigned for very long in the history of Science --
>except for transient periods before better theories remove the need
>for the assumption of extra, special laws.  The moral is that,
>whenever you're pretty sure you are dealing with a "genuine emergent",
>you're probably making a mistake that reflects the primitive scientific
>culture of your time.  The longest holdout was "life", or the vital
>spirit, whose reduction commenced with Pasteur (and Darwin) and was
>pretty much buried with Watson-Crick.  
>
>A present-day holdout is "consciousness", and this is well illustrated
>by Penrose's dogmatic naiveties.

Amazing, isn't it, how a person with such a reputation in one field,
can come such a cropper in an unrelated one!

>It is no accident, I suppose, that he
>does not cite the suggestions I made about consciousness in "The
>Society of Mind", in which I suggest that most of the phenomena
>involved are related to (limited amounts of) short term memory.  If
>so, future AI machines will be much more conscious than humans are,
>and may also have much less sense of mystery about it.

I'm afraid I haven't read "The Society of Mind" (though as it happens I
noticed it on a colleague's desk earlier today), but I'm interested in
the concept of consciousness as related to function.  I had occasion
several years ago to look into work on consciousness by experimental
psychologists, and I came to the conclusion that though it is obvious
that certain functions are closely associated with consciousness in us,
the presence of such functions in a machine would not be sufficient
evidence that the machine was conscious.  Simply because it could
always be asserted that in the machine, the functions were being
performed by an unconscious mechanism.  My problem is that I cannot
imagine any counter-argument to this -- if we agree that no current
machine is conscious, why should we believe any future machine to be so
-- it could perform indistinguishably from a person, while being
"nothing but" an unconscious object.

This is why I think that deciding that something is conscious --
whether that thing is your kid brother, your PC, or an android which
has fooled you that it's your mother -- says more about you than about
the thing you're talking about.

Seriously, though (and maybe I should say that I haven't looked at
comp.ai in a couple of years), what could ever be sufficient evidence
for machine consciousness?

kas@cs.aber.ac.uk (Kevin Sharp) (10/04/90)

Maybe I'm just ridiculously naive, but I've always believed that an
emergent property was simply one which was not *expected* when the
system was designed. This usually only refers to desirable properties
--- others being termed bugs :-)  

Many of the examples cited in earlier postings exhibit emergence
because it is difficult to predict the behavior of the system, e.g.
non-linear or chaotic systems, or those with many interacting parts.
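
A standard toy illustration of that difficulty (my example, not from the posting above) is the logistic map: a one-line deterministic rule whose long-run behavior in the chaotic regime looks "emergent" simply because trajectories from nearby starting points diverge beyond any practical foresight.

```python
# Logistic map x -> r*x*(1-x).  At r = 4.0 the map is chaotic: tiny
# differences in the starting point are amplified at every step.
def iterate(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = iterate(0.300000)
b = iterate(0.300001)   # perturb the sixth decimal place
# After 50 steps the two trajectories bear no useful resemblance to
# each other, even though the rule itself is trivially simple.
```

Nothing "extra" is hidden in the rule -- it even has simple exact fixed points (0 and 0.75) -- so the unpredictability is a property of iteration, not of any added law.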

Of course unexpected behavior can always be viewed as the lack of
adequate foresight, or to quote Pope...

  "He who expects little will never be disappointed"

--
Kevin Sharp,                      UUCP : {WALES}!ukc!aber-cs!kas
AI and Robotics Research Group,   JANET: kas@uk.ac.aber.cs
Department of Computer Science,   PHONE: +44 970  622450
University College of Wales, Aberystwyth, Dyfed, UK. SY23 3BZ

burley@world.std.com (James C Burley) (10/05/90)

In article <3565@media-lab.MEDIA.MIT.EDU> minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) writes:

   So I wasn't saying what to do, only suggesting that you look more
   thoughtfully at what may already be telling "you" what to do.  Don't
   let your mind kick you around.

Assuming certain popular scientific models of the mortal human brain are
valid, would another way to phrase the last sentence be "Don't let your
lizard-brain kick your chicken-brain around, and don't let your
chicken-brain kick your human-brain around."?  I'm trying to get a handle
on what you mean by not letting your mind kick you around -- I mean I THINK
I understand what you mean (to whatever extent I may be said to think), but
it is likely I'm mistaken.  (And we do still have the problem of trying to
determine what part(s) of the brain are "right" about anything: if we are
all machines, why are human machines and their correlative extra brainage
in any way more useful or valuable than mammals, reptiles, or even krill,
plankton, algae...?  We're certainly not more numerous, and show no signs
of outlasting most of the other species.  Put another way: if we let our
baser drives take over, spent less time pounding on keyboards, driving
cars, and wasting the planet's resources, and instead pursued simple food,
drink, and procreation to pretty much the exclusion of all else, and did this
on a mammoth (say 5 billion!) scale, would we last longer as a species?  Yet
is length of species existence even important?  Ah well.)

James Craig Burley, Software Craftsperson    burley@world.std.com

rolandi@sparc9.hri.com (Walter Rolandi) (10/05/90)

In article <3560@media-lab.MEDIA.MIT.EDU>,
minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) writes:


> It seems to me that nsj identifies awe, mystery, and the aesthetic
> experience as "human".  Well, I beg to differ.  Those, in my view are
> the barren world of infantile thought.  Yes, I don't like "beauty"
> because I have certain suspicions about what's happening when "I like
> something without knowing why".....


Does this mean that you do like things if/when you know why?  Would you
say that you know how to know why you like things?


--
------------------------------------------------------------------------------

                            Walter G. Rolandi          
                          Horizon Research, Inc.       
                             1432 Main Street          
                         Waltham, MA  02154  USA       
                              (617) 466 8367           
                                                       
                             rolandi@hri.com           
------------------------------------------------------------------------------

sarima@tdatirv.UUCP (Stanley Friesen) (10/05/90)

In article <1990Oct4.154655.23004@canon.co.uk> rjf@canon.co.uk writes:
>
>  I had occasion
>several years ago to look into work on consciousness by experimental
>psychologists, and I came to the conclusion that though it is obvious
>that certain functions are closely associated with consciousness in us,
>the presence of such functions in a machine would not be sufficient
>evidence that the machine was conscious.  Simply because it could
>always be asserted that in the machine, the functions were being
>performed by an unconscious mechanism.  My problem is that I cannot
>imagine any counter-argument to this:

The counter-argument is simple.  This is also true of the human brain!
No individual neural mechanism in the brain is conscious, nor is any
individual subsystem in the brain conscious.  Consciousness is the result
of the sum of the activities and interactions of many components and
mechanisms within the brain.  Thus if implementation via unconscious parts
denies consciousness, then *we* are not conscious either, we just think we
are.

> -- if we agree that no current
>machine is conscious, why should we believe any future machine to be so
>-- it could perform indistinguishably from a person, while being
>"nothing but" an unconscious object.

Because we do not agree that no current machine is conscious - we all agree
that the human machine is indeed conscious.  [Note this is still true even if
an essential part of the implementation depends on quantum uncertainty or
chaotic unpredictability, since both are essentially physical mechanisms]

>This is why I think that deciding that something is conscious --
>whether that thing is your kid brother, your PC, or an android which
>has fooled you that it's your mother -- says more about you than about
>the thing you're talking about.

You're probably right here.

>Seriously, though (and maybe I should say that I haven't looked at
>comp.ai in a couple of years), what could ever be sufficient evidence
>for machine consciousness?

I, personally, would be convinced by the robot described in the open letter
posted to comp.ai a short time ago.  At least if I knew its implementation
was essentially as described.  [It is interesting that the mechanisms the
author described are ones that I have long thought central to sentience!]

---------------
uunet!tdatirv!sarima				(Stanley Friesen)

cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) (10/05/90)

In article <3560@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>the very things Neal likes to wallow
>in, I suspect, are mainly the situations in which certain brain-parts
>are stimulated by poorly-known "innate releasing mechanisms" of the
>kind described by Lorenz and Tinbergen.  Because of the -- I said
>"primitive" to mean evolutionarily early -- way those brain-parts were
>connected in our early ancestry, those beauty-and-mystery-and-awe
>activities "take over" and inhibit the more recent developments that
>evolved in our journey from monkey to sapiens.  The joke, to me, is
>that THINKING is the glorious part of being human, and I have not the
>slightest reason to suppose that Neal's emotions when transfixed by a
>stupid blue sky or (I dare to say) stupid Rembrandt Portrait that
>people think show the character of a mind from the lines in a face --
>that those emotions are any more subtle or elevating than those of a
>mouse under whichever conditions evoke similar cognitive arrests.  

Well... a hallmark of evolving systems is that they tend to grow
monotonically, meaning that earlier forms are retained.  Thus your whole
life is dependent on metabolic pathways inherited from bacteria, and
neoteny as an evolutionary tendency.  This said, some questions/points,
Dr.  Minsky:

1) Surely we must say that ALL of what we are is human.  Is there any a
priori basis other than prejudice to say that it is the later forms of
evolution that are more valuable?

2) Let's try to separate aesthetics from emotions.  I agree that
emotions are evolutionarily prior.  Indeed, tropisms can be understood
as primitive emotion, and thus surely all animals have emotions.  In
fact, emotions are critical to survival, motivating people to eat,
sleep, etc.  Would you agree that achievement is necessarily motivated
by emotion, the joy of success, the fear of failure?  Without these, Dr. 
Minsky, no doubt you would not be able to achieve what you have in your
life. 

3) But on the other hand, there is every indication that *aesthetics*
are *not* evolutionarily prior, but rather closely correlated with
general human intelligence.  No other species decorate themselves.  The
earliest stone tools show aesthetic elaborations unnecessary for
function.  No doubt somewhere in the vast brain are mechanisms for:
a) hearing pitches, intervals, rhythms; b) color patterns; c) geometric
forms, etc.  These are inherent *human* qualities, as is humor.  I
suspect that these features of mind are *necessary* for reasoning, etc. 

4) While aesthetics cause emotional reactions, *everything* causes
emotional reactions, including rational thought.  The exhilaration of
intellectual discovery is why I do this stuff.  And I note with irony
how vehement your comments are in discussing these issues.  Further,
rational thought causes aesthetic reactions.  Many have noted that
scientists accept theories on the basis of their *beauty* and *elegance*
where other criteria fail. 

5) Mice cannot be emotionally motivated by a Rembrandt.  Many tribal
people cannot recognize their own faces in photographs.  Like language,
aesthetics is a *learned skill*, an elaboration based on innate mental
capabilities.  My father hates David Bowie.  The arts have *evolved in
culture as science has*, and are every bit as "highly evolved" as the
rational sciences. 

>Sorry to flame so long at such trivial matters.  Back to
>philosophy/science/psychology?  

These matters are not trivial.  They go to the whole basis of psychology
as an evolved capacity of surviving organisms.

To conclude, it seems extremely short-sighted to dismiss the
emotional/aesthetic as subordinate to the rational.  Aside from the
dubious theoretical basis, this attitude is extremely dangerous.  It
cuts off the *means* of living (how to live) from the *ends* (why live
at all), as the "Western" ideology threatens the viability of life on
the planet through the imperative for economic expansion at all costs. 

I suspect that your view is de-selective, and its extinction will ensue. 
I hope that it will only be the extinction of the *view*, and not of
yourself, or worse yet myself and many, many others. 


-- 
O------------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large, cjoslyn@bingvaxu.cc.binghamton.edu
| Systems Science, SUNY Binghamton, Box 1070, Binghamton NY 13901, USA
V All the world is biscuit shaped. . .

jgsmith@watson.bcm.tmc.edu (James G. Smith) (10/06/90)

Just a few quickies.  (Oh how I long for the eloquence of others.)

1.  Conscious machines.

In someone's discussion about what will be required to consider machines
conscious, it seemed to me that they were including being human in the 
definition of consciousness.  My conclusion is that we will be able to call
a man-made machine conscious after we have examined the machine which is
ourselves and determined how "consciousness" emerges from that machine.  Those
man-made machines which mimic that emergence will be considered conscious.   My
problem with this is that "consciousness" is a spectrum, and to decide that
something is conscious or not is to create an arbitrary cut off point.

2.  Intelligence as better for life.

The question was raised as to whether it was better to be intelligent, and
perhaps rational.  The answer to that depends on choosing a goal, and that
choice is arbitrary.  If the goal is to be happy, the well known answer is
probably not. (Ignorance is bliss).  If the goal is survival of the species,
I would say that chances are fairly good that it is better to be intelligent.
My conclusion is based mostly on the idea that a species that can locate itself
on more than one planet is most likely to survive longest, and my bet is that
only an intelligent species is going to be able to do that efficiently.  (Other
species will certainly tag along, but are not as likely to become as wide 
spread as the intelligent one.)

*
(student of Immunology, BTW)

n025fc@tamuts.tamu.edu (Kevin Weller) (10/08/90)

In article <1990Oct4.154655.23004@canon.co.uk> rjf@canon.co.uk (Robin Faichney) writes:

   ......................................  My problem is that I cannot
   imagine any counter-argument to this -- if we agree that no current
   machine is conscious, why should we believe any future machine to be so
   -- it could perform indistinguishably from a person, while being
   "nothing but" an unconscious object.

   This is why I think that deciding that something is conscious --
   whether that thing is your kid brother, your PC, or an android which
   has fooled you that it's your mother -- says more about you than about
   the thing you're talking about.

   Seriously, though (and maybe I should say that I haven't looked at
   comp.ai in a couple of years), what could ever be sufficient evidence
   for machine consciousness?

I can't discern your exact position on the issue here.  Are you saying
that you don't see sufficient evidence that *other* *people* are
conscious?  If not, how can you claim that no evidence would ever be
sufficient to decide that a machine is conscious?  If someday someone
builds an android that looks just like a person and *acts* just like a
person, how could you tell the difference?  ESP?  :-)

-- Kev

n025fc@tamuts.tamu.edu (Kevin Weller) (10/08/90)

In article <1990Oct4.045104.24620@eng.umd.edu> smeagol@eng.umd.edu (Kenneth A. Hennacy) writes:

   Also, some thoughts are not sequential, and so using a sequential channel
   of communication can introduce misleading notions.  As an example, I 
   refer you to the notion that the universe is actually 1 particle.  This 
   requires you to abandon the notion of simultaneous occurrences.  However,
   many of our theories, (relativity), perturbation calculations in 
   quantum field theories, etc. require such notions.  

   Ken Hennacy

My knowledge of relativity theory is pretty good (much better than
that of quantum mechanics & friends).  By my understanding, relativity
is the very theory that demolishes the simultaneity idea.  Observers
in two different frames of reference can observe the passing of two
events at different times when they would have observed the same
events as simultaneous if their frames of reference were in sync (at
rest with respect to one another).  Simultaneity is a practical
convenience only, like Newtonian physics for low relative speeds.
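To put numbers on that, here is a small sketch of my own (textbook Lorentz
transformation, not anything from Ken's post): two events simultaneous in one
frame are not simultaneous in a frame moving at half the speed of light.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_t(t, x, v):
    """Time coordinate of event (t, x) as seen from a frame moving at v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C ** 2)

# Two events simultaneous in the rest frame (both at t = 0),
# separated by one light-second of distance.
v = 0.5 * C
t1 = lorentz_t(0.0, 0.0, v)
t2 = lorentz_t(0.0, C, v)  # x = one light-second

print(t1, t2)  # 0.0 and about -0.577 s: no longer simultaneous
```

The moving observer sees the distant event happen over half a second earlier.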

I don't know about the other theories you mention.  If they actually
*require* literally simultaneous occurrences, then they may be in
conflict with relativity theory.

The quantum bootstrapping hypothesis I used as my example is but one
of many modern attempts to explain the weirdnesses of quantum events.
There are many others, and holistic themes are common among them.

-- Kev

jhess@orion.oac.uci.edu (James Hess) (10/09/90)

In article <8629@helios.TAMU.EDU> n025fc@tamuts.tamu.edu (Kevin Weller) writes:
>In article <1990Sep27.185805.21493@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:

>>  A second example: if you put large pine forrests, rabbits and foxes
>>  together in northern Canada, you will get a 10 year cycle of boom
>>  and bust in the populations of rabbits, foxes and young pine trees.
>>  (Rabbits LOVE to eat pine needles, far more than carrots.)  Is this
>>  pattern an emergent property?  From the perspective of a naive and
>>  innumerate individual, the answer is certainly yes.  The cycle is
>>  there, it was not predictable (by them) and it is not easy to 
>>  identify the source in myopic analysis of rabbits, foxes, or pines.
>>  From the perspective of an ecologist or someone versed in simple
>>  dynamical systems theory, it is not an emergent property.  It can
>>  be predicted, modeled, and well explained, based on properties of
>>  the constituent elements, e.g. kilocalories needed, supplied, 
>>  gestation period, etc.
>>
>>  Again, I would expect advocates of the "emergent property" term to
>>  be somewhat bothered by this situation...
>
>On the contrary, I am not bothered by it at all.  Your example is
>simply not one of emergence.  I don't claim that ALL such phenomena
>are emergent.
>
>What I am trying to say is that SOME phenomena CAN NOT be explained
>SOLELY on the basis of component properties.  

I don't quite agree with Kevin in this instance.  An emergent property is
one that emerges from the component properties AND their arrangement into a
system where their INTERACTIONS give rise to properties that are not 
inherent in the component properties alone.  The ten-year cycle above is 
emergent in the system; it would not occur if one of the components was 
removed.

In a sense, the discussion of emergent properties is nothing new.  It becomes
more salient in contemporary analysis as we look at systems that are more 
complex, have more interactions, and the interactions play a more important
part in the behavior of interest.  In particular, philosophers discussing the 
mind/brain problem have often had difficulty understanding how consciousness,
cognition, or intelligence could arise.  It either had to be inherent in a 
component, or be the immaterial gift of God--a mind acting on the brain.

Incidentally, my favorite example of an emergent property was given by Marvin
Minsky in "Society of Mind".  The function or purpose of a box is generally
to contain something--yet any side alone or any subset of its sides have no
property of containment.
>
>>  are unable to explain the observations based on properties deduced
>>  from these atoms, they reach for terms such as "emergent properties"
>>  rather than doing good science and looking to reformulate the basic
>>  hypothesis in new ways.  Hiding behind a pseudo-science of "emergent
>>  properties" will probably delay the real struggle: to find more
>>  suitable analytic tactics and more suitable atoms to form the 
>>  foundation of a "proper" scientific explanation.
>>
It seems from Gary's discussion that he thinks emergence refers to some kind
of unanticipated, undeducible property that scientists use when analysis fails.
Instead, it is precisely the more suitable analytic tactic he calls for.

He might then ask, what is not possessed of emergent properties?  I suggest he 
eat sugar-coated cornflakes for breakfast tomorrow and ask himself if they are
possessed of any properties not wholly properties of their parts.

jhess@orion.oac.uci.edu (James Hess) (10/09/90)

In article <ZCN%#J*@rpi.edu> kyriazis@iear.arts.rpi.edu (George Kyriazis) writes:
>
>Humans are extremely inconsistent and unpredictable, as opposed
>to neurons or whatever else forms other organized systems.  This
>increases the randomness of the system and the resulting global
>behaviour is not so stable as to be characterized as organized.  Now, here
>is the flip side:  Neurons definitely cannot comprehend human
>behaviour, so a human (being part of a society) cannot comprehend
>the behaviour of the society.  So, even if the organized behaviour of 
>the human society exists, I think we won't be able to realize its
>existence!  Monitoring humanity for long periods of time will
>be valuable for the understanding of the path of the human society,
>and maybe predicting its future, but I don't think it's going to
>get anywhere.
>
You can stretch an analogy too far.  But indeed, understanding the behavior of 
human society (sociology, social psychology, anthropology, economics (Ugh!))
is the true hard science.  Physics and chemistry are for wimps.  

Competing suggestions for further thought:

Cybernetics includes the study of ways of building reliable systems from 
unreliable components.  (Redundancy, feedforward, feedback, error correcting)
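The classic redundancy result is a one-line calculation (my own toy numbers,
not from any of the posts): a 2-out-of-3 majority vote of independent
components beats any single component whenever each is right more than half
the time.

```python
def majority_reliability(p):
    """Probability that a 2-out-of-3 majority vote of independent
    components, each correct with probability p, is correct."""
    return p ** 3 + 3 * p ** 2 * (1 - p)

for p in (0.9, 0.99):
    print(p, majority_reliability(p))
# 0.9 -> 0.972, 0.99 -> 0.999702: the system outperforms its parts
```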

Predicting the future of human society--organized behavior does exist, and we 
use our recognition of it to guide our choices.  But there are too many 
variables to model--we must simplify and reduce.  Here we lose much of the 
variance of the system.  This variance is often the source of novelty and 
social change.  Think of mutation and evolution.  And chaos theory shows that 
we can't  always predict the behavior of some very simple recursive systems 
with one variable.

jhess@orion.oac.uci.edu (James Hess) (10/09/90)

Speaking of Penrose, who wants to re-introduce Platonic ideals through the 
limitations of axiomatic mathematics and the failure of reductionism:

Let the set of axioms be the sticks of various lengths in a set of tinkertoys,
and the allowable operations be defined by the holes in the wheel-shaped hub 
pieces.  As we operate on the sticks we begin to build structures.  Let these
be our theorems.  Now some structures will be in the set of all allowable 
structures, and some will not.  By experimenting with the axioms, operations, 
and theorems we will discover which are part of the set.  Can we specify in 
advance whether we will be able to join two sticks of given length at a given 
angle at some point in space with reference to our origin?  If we cannot, does
this mean there is some Platonic set of tinkertoy structures that we are 
discovering, or that this set is determined by our specification of the axioms
and operations?  If we change an axiom in an interesting way, we change the 
set of possible structures.  Some sets will be highly interesting, some won't.

Just how arbitrary is mathematics, anyway?
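The tinkertoy picture can be mimicked in a few lines of code (my own toy, in
the spirit of Hofstadter's MIU system, not anything from the post): an axiom
plus rewrite rules generates a set of "theorems", and changing the axiom
changes which structures are reachable.

```python
def theorems(axiom, rules, max_len=6, max_steps=5):
    """Breadth-first enumeration of strings derivable from the axiom."""
    seen = {axiom}
    frontier = [axiom]
    for _ in range(max_steps):
        nxt = []
        for s in frontier:
            for old, new in rules:
                i = s.find(old)
                while i != -1:  # apply the rule at every position
                    t = s[:i] + new + s[i + len(old):]
                    if len(t) <= max_len and t not in seen:
                        seen.add(t)
                        nxt.append(t)
                    i = s.find(old, i + 1)
        frontier = nxt
    return seen

rules = [("I", "IU"), ("UU", "")]
set_a = theorems("MI", rules)
set_b = theorems("MU", rules)  # change the axiom...
print(set_a == set_b)          # False: ...and the theorem set changes
```

Whether that set was "discovered" or "determined by the specification" is, of
course, exactly the question.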

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/10/90)

In article <2711556F.7484@orion.oac.uci.edu> jhess@orion.oac.uci.edu (James Hess) writes:
>

>Competing suggestions for further thought:
>

>Predicting the future of human society--organized behavior does exist, and we 
>use our recognition of it to guide our choices.  But there are too many 
>variables to model--we must simplify and reduce.  Here we lose much of the 
>variance of the system.

  What?  Nonsense!  Asimov invented it years ago!  Or rather, "Hari Seldon".
It's called: Psycho-history!  :>
--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

smoliar@vaxa.isi.edu (Stephen Smoliar) (10/10/90)

In article <4152@bingvaxu.cc.binghamton.edu>
cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>In article <3560@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu
>(Marvin Minsky) writes:
>>  The joke, to me, is
>>that THINKING is the glorious part of being human, and I have not the
>>slightest reason to suppose that Neal's emotions when transfixed by a
>>stupid blue sky or (I dare to say) stupid Rembrandt Portrait that
>>people think show the character of a mind from the lines in a face --
>>that those emotions are any more subtle or elevating than those of a
>>mouse under whichever conditions evoke similar cognitive arrests.  
>
>To conclude, it seems extremely short-sighted to dismiss the
>emotional/aesthetic as subordinate to the rational.

Having read both sides of this story, my own opinion is that such a dismissal
would constitute a misreading of Minsky's original observation.  The problem
here seems to stem from Cliff's attempt to use "rational" in his interpretation
of Minsky's use of the word "thinking."  Now perhaps too much exposure to THE
SOCIETY OF MIND is beginning to go to my head (play on words sort of intended);
but one of the joys of that book was that it shook me free of any instinct to
try to equate those two words.  To try to reduce the matter to the brink of
over-simplification, thinking is what we do with our minds as we interact with
the world around us.  It is not necessarily rational according to many (most?)
existing standards of rationality in logic (and perhaps epistemology, as well).
Indeed, even if we give up the logical position and pursue the course of
philosophers who simply wish to account for explanatory laws, we are still
liable to be frustrated.  The fact is that there are plenty of things which
we do with our minds which are downright irrational, and that is one of the
things which makes us human.

Having cleared up that matter, I would say that we probably have a situation
in which what we do with our emotions is prior to what we do with sentential
forms (putting dispositions ahead of propositions, as Minsky did in his
original K-lines paper).  However, what we do with our minds (which is
to say "thinking") is prior to any manifestation of aesthetics in our
behavior.  Since I can't see a blue sky here in Los Angeles, let me pick
on the sea instead.  I would argue that there is no such thing as a spontaneous
response to the sight of the sea.  We cannot avoid responding to the sea on the
basis of any number of memories we have had, including books we have read,
movies we have seen, and (particularly in my own case) music we have heard.
(Vaughan Williams is forever with me.)  In other words whenever I react to
the sight of the sea, my mind is VERY BUSY, indeed;  and if it were not busy,
I would not be having that reaction.  If I detach my mind from the experience,
the sea becomes just as "stupid" as Minsky's blue sky and Rembrandt collection
(which I cannot look at without remembering that old Charles Laughton movie,
just to take another shot at the same point).  The only thing which is
troubling about Minsky's position is that it turns our attention away
from any sense of aesthetic universals, but I would say that aesthetic
theory has needed that kick in the pants for quite some time.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

gaben@microsoft.UUCP (Gabe NEWELL) (10/11/90)

In article <3565@media-lab.MEDIA.MIT.EDU>, minsky@media-lab.MEDIA.MIT.EDU 
(Marvin Minsky) writes:
> I actually had a point to make that illuminates this problem, though
> it doesn't solve it.  I wasn't telling you what to do.  I was saying
> something quite different: that maybe you might say to yourself, "Am I
> really liking this?  What am "I", indeed?  When one part of my brain
> "likes" something very much, is it possible that there are other parts
> of my brain -- maybe much <larger, better, more evolved -- whatever
> you think>> that are being suppressed, put out of it, deprived of life
> and liberty etc.  Ask yourself (as some stoic philosophers did, I
> suspect) -- "Who are those little proto-mammalian pleasure centers in
> my brain to tell me what I really *should* like.
>
> So I wasn't saying what to do, only suggesting that you look more
> thoughtfully at what may already be telling "you" what to do.  Don't
> let your mind kick you around.

I was thinking about this a little.

It reminded me of problems I have experienced in thinking about psycho-
analysis.  Basically for these domains, the words "I", "know", and "choose"
are not particularly useful concepts - they lead to confusion more than they
do to enlightenment or adaptive decision making or self-awareness or
clinical insight.  (A simplistic concept of "I" is a barrier to kinds of
self-knowledge.)

For example, which is "I": the "proto-mammalian pleasure centers", or the part
of me that is asking the question "Who are ... *should* like"?  Should
I attempt to address them as a whole, or should I try to have a test that
allows me to select the most useful definition of "I" for a given decision?

Clinical experience in couple or family counseling can give lots of
evidence of confusion about "I" and poor representation of "choose"
leading to maladaptive behavior and not particularly useful internal
representations of situations.

My question then is what paradigm is currently available that can richly
address subtleties of "I" "choose" and "know"?

gaben@microsoft.UUCP (Gabe NEWELL) (10/12/90)

I mentioned to a friend of mine the discussion of beauty
and intelligence.  He sent me the following quote, which,
although entertaining, seems to demand some context which
I cannot supply.  Here it is for what it is.


From cameronm Thu Oct 11 18:26:25 1990
To: gaben 
Subject: Charles Fort
Date: Thu Oct 11 17:17:48 1990
<EndOfHeader>


Every science is a mutilated octopus.  If its tentacles were not 
clipped to stumps, it would feel its way into disturbing contacts.  
To a believer, the effect of the contemplation of a science is of
being in the presence of the good, the true, and the beautiful.  
But what he is awed by is the mutilation.  To our crippled 
intellects, only the maimed is what we call understandable, because 
the unclipped ramifies into all other things.  According to my 
aesthetics, what is meant by beautiful is symmetrical deformation. 

-- Charles Fort 

rjf@canon.co.uk (Robin Faichney) (10/12/90)

In article <58130@microsoft.UUCP> gaben@microsoft.UUCP (Gabe NEWELL) writes:
>In article <3565@media-lab.MEDIA.MIT.EDU>, minsky@media-lab.MEDIA.MIT.EDU 
>(Marvin Minsky) writes:
>> What am "I", indeed?
>> [..]
>> Who are those little proto-mammalian pleasure centers in
>> my brain to tell me what I really *should* like.
>> [..]
>> Don't let your mind kick you around.
>
>I was thinking about this a little.
>
>It reminded me of problems I have experienced in thinking about psycho-
>analysis.  Basically for these domains, the words "I", "know", and "choose"
>are not particularly useful concepts - they lead to confusion more than they
>do to enlightenment or adaptive decision making or self-awareness or
>clinical insight.  (A simplistic concept of "I" is a barrier to kinds of
>self-knowledge.)
>
>For example which is "I", the "proto-mammalian pleasure centers", or the part
>of me that is asking the question "Who are ... *should* like".  Should
>I attempt to address them as a whole, or should I try to have a test that
>allows me to select the most useful definition of "I" for a given decision.
>
>Clinical experience in couple or family counseling can give lots of
>evidence of confusion about "I" and poor representation of "choose"
>leading to maladaptive behavior and not particularly useful internal
>representations of situations.
>
>My question then is what paradigm is currently available that can richly
>address subtleties of "I" "choose" and "know"?

I don't know of any such paradigm, but I tend to doubt the need for
one.  Not that I pooh-pooh the problem -- I have become aware of it in a
very similar context.  But it seems to me that the major hurdle for
people in dealing with the plurality of "I"s is just in realising that
fact.  The primitive concept of the "I" is very solid -- independent
and individual (note I'm not saying the people are like that, just
their "I" concepts).  All you can do with that is encourage someone to
notice the transition, having first suggested the possibility, of
course.  When a person begins to realise that he can actually be
different people at different times, then the natural progression is to
watch out for that and gain skill in dealing with it.

So what I'm saying is that this is an irreducibly experiential
phenomenon, not likely to be helped (much) by the projection onto it of
any conceptual framework.

Is this really the right group for this issue?  Seems a little too
"humanistic" to me.  :-)

fostel@eos.ncsu.edu (Gary Fostel) (10/13/90)

    In spite of the flood of suggestions for what "emergent properties"
    might be, I remain uncertain and quite skeptical of the value of the
    term.   My intent is not to procure a new flood of attempts at
    defining or justifying this term -- the last umpteen are plenty.
    An unsympathetic definition might be that an emergent property is
    one that is not or was not predicted from localized properties 
    of the elements which, when combined, produce the new property.
    My favorite unsympathetic example: from the perspective of
    a Martian, an emergent property of a collection of parts and tools
    and an engineer, might be the production of a useful machine.  Of
    course the Martian does not know that the engineer is a particularly
    important part.  We know that and using "emergent property" seems
    absurd -- from OUR perspective.  

    I suspect that what is and is not going to fall under many people's
    notion of "emergent" is going to depend on the level of understanding
    of the people and the moment. The more easily a property can be
    predicted using available methods, the less likely it is to be an
    "emergent property".  For example, many people seem willing to use
    this term for functional properties of neural systems, but I wonder
    if as many would be comfortable with the statement that a tabular
    printout is an emergent property of a particular set of Cobol
    statements.  After all, the table is not at all readily predicted
    from any one of the Cobol statements. 

    A more interesting (to me) issue is whether there might be some
    properties that really are VERY hard to predict or model based on
    the constituents.  For example, non-linear dynamical systems are
    often essentially impossible to predict, not due to lack of theory
    but for intrinsic reasons -- so called chaos theory.  In the domain
    of artificial intelligence (or perhaps right outside it :-) are 
    some people who argue that human intelligence can not be duplicated
    or modeled because of the subtle but undeniable infusion of EVERY
    detail of life into the decisions and thoughts of a moment. If
    memory serves me, Penfield is a recent example of this group. 
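For readers who want to see the intrinsic unpredictability in action, here is
a standard demonstration (my choice of example, not Fostel's): the logistic
map at r = 4 is completely deterministic, yet two starting points differing
by 10^-10 end up on wildly different trajectories.

```python
def max_divergence(x0, y0, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x) from two nearby starting
    points and track the largest gap between the trajectories."""
    x, y = x0, y0
    worst = abs(x - y)
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        worst = max(worst, abs(x - y))
    return worst

print(max_divergence(0.3, 0.3 + 1e-10))  # order 0.1, not 1e-10
```

The gap roughly doubles each step, so any finite-precision measurement of the
initial state is soon worthless for prediction.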

    Now, it is not so difficult to produce a model that does things
    like the things a chaotic system does -- it is only hard to 
    make a model that does the same thing a chaotic system would do.
    An interesting property, "P", that systems might have, would be
    that they produce behaviors that are drawn from a well defined 
    set of behaviors, even though the direct prediction of which
    behavior is intrinsically intractable.
   
    A Cobol program does not have property P, since its behavior is
    quite predictable; a set of neurons (esp real neurons) might not
    be so easy to predict.  Neural "programming" is more a question of
    selecting alternative neural nets from a set of possible nets until
    the behavior happens to be the one desired.  Such a system might 
    well have property P if it could be shown that the selection 
    strategy was really the only way to get the desired behavior
    with probability 1.  

    The selection strategy for a neural net may bother some folks,
    who feel they "design" nets. In the case of artificial nets, it
    is probably true that the net can be a priori "designed" and then
    built.  I would say that those systems do not have property P.
    Natural nets and some synthetic nets are often "trained"
    which really means that a sequence of nets are produced, with the
    sequence terminating when a net with the desired behavior is 
    found.  If it were shown that specific behavior of one of these
    nets was not predictable from any feasible set of measurements of
    properties of the net then the selection scheme would be required
    and the final result would have property P. 
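    That selection view of training can be caricatured in a few lines (my
    sketch; the "net" is a single hypothetical weight, nothing from the
    post): candidates are generated until one exhibits the desired behavior,
    rather than the weight being derived a priori.

```python
import random

def behaves_as_desired(w):
    """Hypothetical target behavior for a one-weight 'net': output
    positive for input +1 and negative for input -1 (i.e. w > 0)."""
    return w * 1.0 > 0.0 and w * -1.0 < 0.0

def train_by_selection(seed=0, max_tries=1000):
    """Produce a sequence of candidate nets, terminating at the first
    one whose behavior happens to be the desired one."""
    rng = random.Random(seed)
    for tries in range(1, max_tries + 1):
        w = rng.uniform(-1.0, 1.0)   # propose a candidate net
        if behaves_as_desired(w):    # keep it only if it behaves
            return w, tries
    return None, max_tries

w, tries = train_by_selection()
print(w > 0.0)  # True: the surviving candidate behaves as desired
```

    Nothing here "designs" the weight; the behavior selects it.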

    Perhaps my property P is what others are calling "emergence" and
    I am just befuddled, or perhaps P is something else and I'm still
    befuddled anyway.   If you would like to spend some time sharing
    my befuddlement, consider whether there is a relationship between
    systems with property P and problems which are NP complete.  In
    each case it seems that there is not a way to "get inside" the
    problem, and search  may be the only way to go.
 
----GaryFostel----

schraudo@beowulf.ucsd.edu (Nici Schraudolph) (10/14/90)

n025fc@tamuts.tamu.edu (Kevin Weller) writes:

>My knowledge of relativity theory is pretty good (much better than
>that of quantum mechanics & friends).  By my understanding, relativity
>is the very theory that demolishes the simultaneity idea.
>[...]

>I don't know about the other theories you mention.  If they actually
>*require* literally simultaneous occurences, then they may be in
>conflict with relativity theory.

It is indeed the case that the two cornerstone theories of modern physics,
Relativity and Quantum Mechanics, directly contradict each other.  This
tends to worry young physicists a great deal - until they realize that it's
a wonderful way to ensure continued funding... :-)

-- 
Nicol N. Schraudolph, C-014                      "Big Science, hallelujah.
University of California, San Diego               Big Science, yodellayheehoo."
La Jolla, CA 92093-0114                                     - Laurie Anderson.
                          nici%cs@ucsd.{edu,bitnet,uucp}

sarima@tdatirv.UUCP (Stanley Friesen) (10/14/90)

In article <1990Oct12.214636.7945@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:
>    I suspect that what is and is not going to fall under many people's
>    notion of "emergent" is going to depend on the level of understanding
>    of the people and the moment. The more easily a property can be
>    predicted using available methods, the less likely it is to be an
>    "emergent property". ...
>     but I wonder
>    if as many would be comfortable with the statement that a tabular
>    printout is an emergent property of a particular set of Cobol
>    statements.  After all, the table is not at all readily predicted
>    from any one of the cobol statements. 

I would have no problem with this.  The operation of any computer program is
an emergent property of the individual instructions that compose it.  This is
because it is the *organization* of the instructions that determines the
program's behavior, not the set of instructions themselves.  (There are many
different programs that can be constructed from any given collection of
instructions, so the program behavior is not even theoretically predictable
from the individual instructions.)  And since the organization of the
instructions is a global property of the entire program, there is no lower
level at which the total behavior can be characterized, thus the behavior is
emergent.
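A two-line demonstration of that point (mine, not Friesen's): the same set of
instructions in a different order is a different program.

```python
def program_a(x):
    x = x + 3   # add, then double
    x = x * 2
    return x

def program_b(x):
    x = x * 2   # the same two instructions, reordered
    x = x + 3
    return x

print(program_a(5), program_b(5))  # 16 13: same parts, different behavior
```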

>    A more interesting (to me) issue is whether there might be some
>    properties that really are VERY hard to predict or model based on
>    the constituents.  For example, non-linear dynamical systems are
>    often essentially impossible to predict, not due to lack of theory
>    but for intrinsic reasons -- so called chaos theory.

This is not related to emergence at all. It is a matter of computational
hardness.  The full properties of a chaotic system are inherent in the simplest
description of the system; they are simply unrecoverable.   Thus no *new*
properties are produced at 'higher levels', the existing properties are simply
made partially visible.

This *is* an important question.  It is the basic reason why pure deductive
reasoning is of limited value in the real world.  It is why all living things
use heuristic analysis of some sort to produce the merely *probable* rather
than the *certain* result of traditional computation theory.  This suggests
that any 'real' artificial intelligence will be prone to error just like we
are.  It also means that there will always be a place for 'normal',
deterministic, non-AI computation.

> In the domain
>    of artificial intelligence (or perhaps right outside it :-) are 
>    some people who argue that human intelligence can not be duplicated
>    or modeled because of the subtle but undeniable infusion of EVERY
>    detail of life into the decisions and thoughts of a moment. If
>    memory serves me, Penfield is a recent example of this group. 

Even if he is right, I do not see why we cannot design a software system to do
the same thing.  It would be very difficult, and it would take computers that
make Crays look like home computers to do it in real time, but it should be
possible.  So a complete cross-indexed associative memory would be needed,
so all decisions would have to be cross-checked with the memory, so all events
of concern to the computer would have to be digested, understood and indexed.
This is *not* impossible.

My main problem with most of these people is that they take differences
between mental processes and *current* computer technology and treat these
as intrinsic limitations of computation.  *BULL*!   So far no well-understood
mental process has proven impossible to simulate in a computer with sufficient
power.  I see no reason why the ones we do not understand should be any
different.

>    An interesting property, "P", that systems might have, would be
>    that they produce behaviors that are drawn from a well defined 
>    set of behaviors, even though the direct prediction of which
>    behavior is intrinsically intractable.
>     a set of neurons (esp real neurons) might not
>    be so easy to predict.  Neural "programming" is more a question
>    selecting alternative neural nets from a set of possible nets until
>    the behavior happens to be the one desired.  Such a system might 
>    well have property P if it could be shown that the selection 
>    strategy was really the only way to get the desired behavior
>    with probability 1.  

You seem to be assuming that there is exactly one desired behavior, as if
intelligence required exactly matched behavior!  So a computer intelligence
would make a different decision than I would; it does not matter, so long as
the decision was arrived at using 'intelligent' processes.  That is,
intelligence is a broad *range* of systems, and the exact duplication of any
one of them is unnecessary to generate intelligence.  So all that would be
necessary is that the computer intelligence be based on a chaotic system that
is *similar*
to the one humans use.  And deterministic computer programs can produce true
chaotic behavior; my lock screen on my workstation is an example.

>    The selection strategy for a neural net may bother some folks,
>    who feel they "design" nets. In the case of artificial nets, it
>    is probably true that the net can be apriori "designed" and then
>    built.  I would say that those systems do not have property P.
>    Natural nets and some synthetic nets, are often "trained"
>    which really means that a sequence of nets are produced, with the
>    sequence terminating when a net with the desired behavior is 
>    found.

You seem to be treating identically wired nets with differing weights as if
they were different nets.   I think this is probably not a useful approach.
Since in real neural systems the weights are constantly changing in response
to experience, this would lead to the strange result that my brain today is
a different set of nets than it was last year!  [Remember, the basis of memory
in living neural systems is the changing of the connection strengths].

>    Perhaps my property P is what others are calling "emergence" and
>    I am just befuddled, or perhaps P is something else and I'm still
>    befuddled anyway.

I would say that P is the property of unpredictability, or intractability.
It is only superficially similar to emergence, which *can* be predictable
if the structure of the system as a whole is taken into account.

>  If you would like to spend some time sharing
>    my befuddlement, consider whether there is a relationship between
>    systems with property P and problems which are NP complete.  In
>    each case it seems that there is not a way to "get inside" the
>    problem, and search  may be the only way to go.

They do seem similar.  And it is this inability to 'get inside' that makes
intelligence necessary for decision making.  If chaotic and NP complete
problems did not abound in nature simple analytic logic would always produce
the right answer, and model-based search strategies would not be needed.
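To put a concrete face on the NP-complete side of the analogy (my example,
not from either post): subset sum, where no known method beats searching
through exponentially many candidate subsets in the worst case -- there is no
way to "get inside" the problem.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force search: try all 2^n subsets, smallest first."""
    for k in range(len(nums) + 1):
        for combo in combinations(nums, k):
            if sum(combo) == target:
                return combo
    return None

result = subset_sum([3, 34, 4, 12, 5, 2], 9)
print(result)  # a subset summing to 9
```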


Was I any help?????


Thanks for listening
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

BKort@bbn.com (Barry Kort) (10/15/90)

In article <58130@microsoft.UUCP> gaben@microsoft.UUCP (Gabe NEWELL) asks:

> What paradigm is currently available that can richly
> address subtleties of "I" "choose" and "know"?

In the case of humans, neuroscience has identified various structures in 
the brain which mediate behavior patterns.  Among the oldest and most 
primitive parts of the brain are the Limbic System and R-Complex.  These 
centers are the sources of primary instinctive drives, including 
self-preservation behaviors.  Mostly these are hard-wired, genetically 
inherited modes which operate outside the scope of conscious awareness.  In 
contrast, humans and other mammals have a large neocortex which is the 
center of learning and acquired behaviors.  At every stage of development, 
from infancy to old age, there are frontiers of learning in which the 
knowledge, skills, and wisdom of the neocortex struggle to overcome the 
primitive instincts of the subcortical regions.  It is easy to tell where 
these frontiers are:  it is precisely where one's emotions run high.  If 
you monitor your emotions (neurotransmitter levels,  general body 
chemistry, signals from the sympathetic nervous system, and involuntary 
behavior patterns), you will recognize the frontiers between cortical and 
subcortical behaviors.

With learning (which is hard work), more and more of the body's behaviors 
come under the regulation of the higher cortical centers.  In literature, 
the metaphors of Devil and Angel are often used to illustrate the tension 
between conscious behavior and subliminal drives.


Barry Kort
Visiting Scientist
BBN Labs
Cambridge, MA

rjf@canon.co.uk (Robin Faichney) (10/16/90)

In article <60045@bbn.BBN.COM> BKort@bbn.com (Barry Kort) writes:
>In article <58130@microsoft.UUCP> gaben@microsoft.UUCP (Gabe NEWELL) asks:
>
>> What paradigm is currently available that can richly
>> address subtleties of "I" "choose" and "know"?
>
>In the case of humans, neuroscience has identified various structures in 
>the brain which mediate behavior patterns.
[..]
>At every stage of development, 
>from infancy to old age, there are frontiers of learning in which the 
>knowledge, skills, and wisdom of the neocortex struggle to overcome the 
>primitive instincts of the subcortical regions.  It is easy to tell where 
>these frontiers are:  it is precisely where one's emotions run high.

You mean there's a precise line between emotions running high and emotions
not running high?  ;-)

>If 
>you monitor your emotions (neurotransmitter levels, general body 
>chemistry, signals from the sympathetic nervous system, and involuntary 
>behavior patterns),

How about the way you feel?

>..you will recognize the frontiers between cortical and 
>subcortical behaviors.
>
>With learning (which is hard work), more and more of the body's behaviors 
>come under the regulation of the higher cortical centers.  In literature, 
>the metaphors of Devil and Angel are often used to illustrate the tension 
>between conscious behavior and subliminal drives.

Devil as libido, yes (I don't think "subliminal" was the word you
wanted here).  But surely the classic Christian paradigm is heaven
above, hell below, us in the middle?  So angelic or saintly behaviour
is an ideal towards which we are supposed to strive, not just whatever
we do consciously.  In reality, of course, there are many more "selves"
than the "beastly" and the "saintly", and monitoring your own emotions
is necessary but far from sufficient in learning to deal with them.
And forcing conscious control is far from being reliably beneficial.
CG Jung is probably the best source on the denizens of the collective
unconscious, which is what we are discussing here.

You know you CS types really should get yourselves an education!  :-)

JAHAYES@MIAMIU.BITNET (Josh Hayes) (10/17/90)

Let me toss in a quick comment wrt emergent properties in the
ecological realm. There has been some debate over the nature of
ecosystems and ecological communities, whether they exhibit
properties characterizable as "emergent".
 
The point of THAT debate, and perhaps of this one as well, is
to vitiate the reductionist approach: if there is some property
of the "system" which cannot be ascribed to any particular sub-
portion of the system, then the traditional approach of breaking
it into its component parts will be unable to address that par-
ticular property.
 
For example, if the putative emergent property we are interested
in is consciousness, it may well emerge not simply from the various
brain regions/neural nodes/ganglia/etc., but from them and the
way they are themselves constructed and interlaced. Consciousness
could easily not RESIDE in any one location, but be a consequence
of the overall structure. There can be some minimum configuration
that exhibits the property of interest, and examination of smaller
selections will be of little or no value.
--------
Josh Hayes, Zoology Dept, Miami University, Oxford OH 45056
jahayes@miamiu.acs.muohio.edu, jahayes@miamiu.bitnet
Disclaimer: I'm a marine biologist, not a neuro-type. Just interested.

vinsci@soft.fi (Leonard Norrgard) (10/19/90)

You wrote:
>[...]
>Seriously, though (and maybe I should say that I haven't looked at
>comp.ai in a couple of years), what could ever be sufficient evidence
>for machine consciousness?

How about first finding evidence that other humans are conscious? When
you have that, apply it to machines. (Hint: #1 is a little bit troublesome.)

fostel@eos.ncsu.edu (Gary Fostel) (10/20/90)

    Stanley Friesen stomped upon my comments about the term "emergence" --
    to my mind missing the points I was trying to make -- and then
    wondered if he "was any help?"  Zounds!  I'm not sure my ego can take
    much more help like that.  Perhaps he (and others) mistook my 
    style for requests for external enlightenment, but I thought they were
    rhetorical questions at best.  Watching the flames grow higher on
    some recent posts perhaps I should be more vociferous.

    In dismissing the deeper implications of chaotic systems in the
    metaphysics of emergence, Friesen said:

        The properties of a chaotic system are inherent in the simplest
        description, they are simply unrecoverable.

    I'm not quite sure what the latter part is supposed to mean, but
    the former is clearly false (so the total statement must be true :-).
    I can think of several ways to describe subsets of a chaotic system
    in such a way that the properties of the system are gone.  For
    example, the first 25% of one of the equations describing the 
    system.  This is a simpler description.  Friesen is probably begging
    questions in his use of "simple" by assuming the operation applied 
    to a complete description to produce a partial description is in
    fact one which preserves the information he thinks is always 
    preserved.  It is a property of well-understood sciences that we
    know how to preserve information at different levels of description.
    I suspect "emergence" is only useful in poorly understood sciences,
    and its usefulness is quite suspect since it may discourage the
    search for different formulations of the descriptions that DO
    allow the sort of "simplifying" that Friesen is assuming.
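
    The chaotic-system point at issue can be made concrete with the logistic
    map: its complete description is a single line, yet even a slightly
    truncated description of the state loses all predictive power.  (A purely
    illustrative Python sketch, not anything either poster proposed.)

```python
def trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x).

    The complete description of this chaotic system is one line,
    yet its long-term behavior cannot be recovered from any
    finite-precision ("simplified") description of the state.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 60)            # one description of the start
b = trajectory(0.2 + 1e-10, 60)    # the "same" start, truncated in the 10th place
divergence = max(abs(p - q) for p, q in zip(a, b))
print(divergence)                  # the two histories end up disagreeing wildly
```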

    Friesen goes on but the most notable disagreement he and I have is
    his unwillingness to accept that his "brain today is a different
    set of nets than it was last year."  It should be patently obvious
    that his brain today most certainly IS different than last year.
    Identically wired nets with different weights are different.  I
    can only hope his nets will change some more!  (;-)

    There is a lot of question begging going on in this discussion, and
    perhaps that is unavoidable in this medium, esp. on this topic.  For
    example, Leonard Norrgard defined fire as an emergent property of
    material, oxygen and heat.  Fire is supposed to be a property of
    the system that can not be explained by the subcomponents and is
    thus emergent.  Well ... but how is "fire" defined?  It is supposed
    to be a property but it would be quite hard to produce a definition
    of the property "fire" based on visual observations.  There are lots
    of different types of fires that look quite different.  And I want
    a definition of fire that excludes things that are not fire.  The
    only way I know to do this is by the classic reductionist scientific
    attitude that fire is what you see when material is oxidized at a 
    high rate in an oxygen atmosphere with sufficient activation energy
    to maintain or accelerate the process.  I think that fire is a 
    property that can't be defined WITHOUT recourse to the constituents.

    Chris Malcolm, in a different post, dismissed Michael Bender's 
    query of "How can we build something we can't define" by claiming
    this was done all the time in research prototypes.  This is also
    sloppy thinking.  (It may well have been intended as a joke, but I
    think the point is serious.)  Prototypes are not objects which
    satisfy any definition; they are instances in search of a definition.
    If one is unable to define a property a system is supposed to have,
    and someone claims to have built a research prototype that has that
    property, there is no way to check that the property is there. 

    The purpose of science is (I thought) to search for descriptions
    of the world, to be offered in terms of defined quantities, and
    then to test those descriptions for adequacy.  Inverting this 
    process, as I suspect many "emergence advocates" may be doing,
    probably interferes with the progress of science -- though of course
    it may have pragmatic benefits.  Skinner was able to train pigeons
    without knowing much of what was going on in pigeons and I worry
    that emergence is an emergent property of a return to behaviorism.
    That sort of "science" has its place, especially in the creation of
    a new field, but if THIS is what emergence is all about why not
    just call it behaviorism?
    Apologies to behaviorists....

----GaryFostel----
                                      Dept of Computer Science
                                      N.C State University
  

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/21/90)

In article <1990Oct19.201604.7280@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:

[stuff deleted]

>    It is a property of well understood sciences, that we
>    know how to preserve information at different levels of description.
>    I suspect "emergence" is only useful in poorly understood sciences
>    and its usefulness is quite suspect since it may discourage the
>    search for different formulations of the descriptions that DO
>    allow the sort of "simplifying" that Friesen is assuming
>
  Well!  You imply that traversing levels of description both up and
down are equivalent - I don't agree.  Moving 'up' a level of description
involves grouping several distinct 'things' at the lower level together
under one classification and therefore losing a degree of granularity -
and also the information that distinguishes the elements of the new
conception.  An example from linguistics is the phoneme - a group of
sounds which people regard as identical for most purposes of comprehension
although they are realized through *many* different physical sounds.  It
is certainly possible to deal with the phonemic phenomena by considering
each member of the group separately, but the number of these is truly
vast and makes the approach extremely cumbersome - and "not useful".  You
could think of it like dealing with text bit-by-bit rather than as a
collection of words.  Note that phonemes and words really are groups
of smaller components, but are not really treated as such by people
at a psychological level - which is a legitimate concern of any science
of man, even if "poorly understood".

  In traversing the description levels downward, how do I preserve the
information that the sequence of characters "c","o","m","p","u","t","e",
"r" refer to a word (and by extension concept) "computer"?  By wrote?
Hardly "useful".  Then what do I do when I move down to the 'bit' level?
And further down?  In any reasonable sense, the 'word' "computer" only
exists at a high cognitive level, it can be 'reduced', but what do you
gain?
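
  The descent-through-levels point can be sketched concretely (an
illustrative Python fragment of my own devising, not anything from the
post itself):

```python
# Moving "down" a level keeps the raw material but discards the
# grouping that makes it a word.
word = "computer"
chars = list(word)                              # down one level: characters
bits = "".join(f"{ord(c):08b}" for c in word)   # down again: a bit string

# The descent is mechanical and reversible in principle...
recovered = "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))
assert recovered == word

# ...but the bit string itself carries no marker saying that these
# 64 bits group into one word, let alone one concept:
print(len(bits), bits[:16])   # the prefix of bits says nothing about "computer"
```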

  I don't see why this sort of thing should be "discouraging" to scientific
inquiry, btw.  Certainly, like any other concept, it can be inappropriately
applied.

>    There is a lot of question begging going on in this discussion, and
>    perhaps that is unavoidable in this media, esp on this topic. For
>    example, Leonard Norrgard defined fire as an emergent property of
>    material, oxygen and heat.  Fire is supposed to be a property of
>    the system that can not be explained by the subcomponents and is
>    thus emergent.  Well ... but how is "fire" defined?  It is supposed
>    to be a property but it would be quite hard to produce a definition
>    of the property "fire" based on visual observations.  

  I agree that this is not a good stab at an "emergence".  In general,
I'm sure that finding a true emergent in an 'objective' model of physics
is going to prove rather difficult for anyone.  But since I'm not a
savant of physics (:>) I'll avoid putting my foot in here.

>    The purpose of science is (I thought) to search for descriptions
>    of the world, to be offered in terms of defined quantities, and
>    then to test those descriptions for adequacy.  Inverting this 
>    process, as I suspect many "emergence advocates" may be doing,
>    probably interferes with the progress of science -- though of course
>    it may have pragmatic benefits. 

  This sounds almost like you consider emergence a 'heresy' against
'science'.  What inversion do you mean here?  Do you find people
who describe observations before they reconcile them with established
science to all be interfering with science?  I would agree if you refer
to say, Pons and Fleischmann, but not if you refer to Bohr.  Why, if 
we should deplore premature announcements, do we encourage researchers
to do this - the infamous "publish or perish" paradox?  


>    Skinner was able to train Pigeons
>    without knowing much of what was going on in pigeons and I worry
>    that emergence is an emergent property of a return to behaviorism.
>    That sort of "science" has its place, especially in the creation of
>    a new field, but if THIS is what emergence is all about why not
>    just call it behaviorism?

  Errr, I think behaviourism and emergence are polar opposites!  Skinner's
followers regarded all behaviour as programmable by simple positive/negative
feedback or training; emergence deals with phenomena (or behaviour if you
prefer) which are *not* obviously a simple consequence of a creature's
parts or input.  Does my idea of behaviourism clash with yours?

--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

danforth@riacs.edu (Douglas G. Danforth) (10/23/90)

In <15238@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:


>behavior.  Since I can't see a blue sky here in Los Angeles, let me pick
>on the sea instead.  I would argue that there is no such thing as a spontaneous
>response to the sight of the sea.  We cannot avoid responding to the sea on the
>basis of any number of memories we have had, including books we have read,
>movies we have seen, and (particularly in my own case) music we have heard.
>(Vaughan Williams is forever with me.)  In other words whenever I react to
>the sight of the sea, my mind is VERY BUSY, indeed;  and if it were not busy,
>I would not be having that reaction.  If I detach my mind from the experience,
>the sea becomes just as "stupid" as Minsky's blue sky and Rembrandt collection
>(which I cannot look at without remembering that old Charles Laughton movie,
>just to take another shot at the same point).  The only thing which is
>troubling about Minsky's position is that it turns our attention away
>from any sense of aesthetic universals, but I would say that aesthetic
>theory has needed that kick in the pants for quite some time.

     It is necessary to be somewhat quantitative here (numbers!).  Any 
response that occurs within 1/2 second I would categorize as being
spontaneous.  So if the first sight of the sea brings a positive feeling
(within 1/2 second) then I would argue that your behavior has become
"automatized" and is not governed by rational, logical, thought.  As one
muses on the sea then I would agree that memories, music, and books
come into play and enhance or modify the initial response.
     The "automatic" responses in animals and humans play a fundamental
role in their behavior and determine which way they will "turn" in a 
multidimensional space of possible behaviors.  All of this is part of
being and "thinking".
     It is conceivable that someone else will have a negative response
(within 1/2 second) to the sight of the sea if extensive prior experience
caused pain or discomfort so that it becomes difficult to assign any
absolute measure of "goodness" to any specific situation.
     The point is that our pleasures and pains guide and direct our
paths of thought in ways that have very little to do with rational or
logical thinking.  Both facets (emotion,logic) work together to make us
"thinking" creatures.
--
Douglas G. Danforth   		    (danforth@riacs.edu)
Research Institute for Advanced Computer Science (RIACS)
M/S 230-5, NASA Ames Research Center
Moffett Field, CA 94035

cam@aipna.ed.ac.uk (Chris Malcolm) (10/24/90)

In article <1990Oct16.094931.8462@canon.co.uk> rjf@canon.co.uk writes:

[stuff about consciousness, emotions, and neurotransmitter levels deleted]

>You know you CS types really should get yourselves an education!  :-)

I always thought it a serious mistake to see AI as part of CS :-)

	"Can you tell me how to get to [someplace]?"
	"Oh dear! If I were you I wouldn't start from here!"
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

fostel@eos.ncsu.edu (Gary Fostel) (10/24/90)

I wrote (among other things) that:

    It is a property of well-understood sciences that we
    know how to preserve information at different levels of description.
    I suspect "emergence" is only useful in poorly understood sciences,
    and its usefulness is quite suspect since it may discourage the
    search for different formulations of the descriptions that DO
    allow the sort of "simplifying" that Friesen is assuming.

Cameron Shelley (U of Waterloo) replied:

    Well!  You imply that traversing levels of description both up and
    down are equivalent - I don't agree. 

and later added:

    In traversing the description levels downward, how do I preserve the
    information that the sequence of characters "c","o","m","p","u","t","e",
    "r" refer to a word (and by extension concept) "computer"?  By wrote? [sic]
    Hardly "useful".  Then what do I do when I move down to the 'bit' level?
    And further down?  In any reasonable sense, the 'word' "computer" only
    exists at a high cognitive level, it can be 'reduced', but what do you
    gain?

                   -  -  -  -  -  -  -  -  -  -  -

    Well, I don't think I implied any such thing, and I doubt the thing I am
    supposed to have implied even has a well defined meaning in the philosophy
    of science.  Certainly, the material which Shelley supplied after that on
    some details of the morphology of linguistics did nothing to make me feel
    ashamed of whatever it was I implied.  I wonder what Shelley means when he
    says that something is "hardly useful".  Useful towards what goal?   And
    what on earth does it mean when he says that "in any reasonable sense"
    a word "only exists at a high cognitive level".   I do not understand, and
    I do not think it is because he is speaking Swahili, but rather newspeak.
    The mature sciences tend to have their technical terms defined in words
    that are not taken from daily usage (e.g. Latin) and there is good reason
    for this.  People are less likely to fall into loose habits when using a
    term like "metastasize" than "consciousness" or "reasonable" or "useful".

    Later in the same note, Shelley gives me the point I was trying to make,
    so perhaps I should not drag this out.  He said:  

       In general,
       I'm sure that finding a true emergent in an 'objective' model of physics
       is going to prove rather difficult for anyone.  But since I'm not a
       savant of physics (:>) I'll avoid putting my foot in here. 

    Indeed, the reason it may be hard to find examples of emergence in 
    "objective" models in physics is that it is a mature science and has
    worked out descriptions that make the emergence go away.

    My concern is that emergence is being adopted as a terminal position in
    the understanding of phenomena when in fact it is a very early, and
    very weak position.  Clearly, one needs to gather observations and one
    needs to organize them prior to making the major progress towards a 
    "proper" science ... but that gathering and labeling should not begin
    to masquerade as a proper science.  Perhaps it is only sloppy net talk that
    makes the use of "emergence" seem to be as I observe, but perhaps it goes
    deeper.  I think the behaviorists made this mistake and it would be a
    shame to see another generation of scientists follow that lead.

----GaryFostel----
					Department of Computer Science
					North Carolina State University

burley@world.std.com (James C Burley) (10/24/90)

In article <1990Oct23.165301.9813@riacs.edu> danforth@riacs.edu (Douglas G. Danforth) writes:

	The point is that our pleasures and pains guide and direct our
   paths of thought in ways that have very little to do with rational or
   logical thinking.  Both facets (emotion,logic) work together to make us
   "thinking" creatures.
   --
   Douglas G. Danforth   		    (danforth@riacs.edu)

I might agree that pleasures and pains guide and direct our thoughts in
ways that have very little to do with CONTEMPORARY rational or logical
thinking -- that is, any such thinking occurring at the instant when a
given pleasure- or pain-response event occurs.

However, it seems to me that we continually use our rational and/or logical
thought to train our reactions for different responses (pleasure, pain,
and so on) in future "instant-reaction" situations.  For example, I'm told
that very few people actually like anything about the first cigarette they
smoke, but whatever pressures -- peer, for example -- that govern their
(semi-)rational thought at the time cause them to begin the process of
retraining their responses, over time, to the point where the instantaneous
response to a cigarette is pleasure.  (Or consider Minsky's "curves" --
perhaps trained by watching bikini-clad models on TV beer commercials! :-)

Hmmm...perhaps one way to distinguish a particular level of intelligence
in "wetware" (machines that must substitute some kind of quick response to
a stimulus for the more complete analysis of which they are capable in
less demanding situations) is whether a being shows the
ability to recondition its own reactions using whatever means are available
to deal with a situation it recognizes through analysis.

(I'm not going to claim this is a test for intelligence or consciousness or
self-awareness...just that it might be an interesting test.)

For example, if I know that I must walk over a walkway of, say, hot coals
surrounded by deadly molten lava to reach a goal I consider sufficiently
desirable (say, survival), I am capable of deciding that my normal response
to heat -- jumping off of it (and thus into the molten lava) -- is inadequate
and must be readjusted, and I am also capable of exposing my feet to very
hot substances (say the coals at the beginning of the walkway) to recondition
my responses (or at the very least, render my feet incapable of any feeling)
so I don't jump off when I make the walk.  Given looser constraints, I of
course might make shoes, and given tighter constraints (such as insufficient
time for such training), I might be capable of simply suppressing the
reaction while I run over the coals.  But other examples may be constructed
where it is necessary for the being to truly retrain its responses to a
stimulus and not simply inhibit them.

Now, if I was a scientist and tested a human being and it passed, I'd
consider that a "control" in an experiment.  (Note that distinguishing
capability from the particular being's willingness to use that capability
in a particular situation is a weak aspect of this theory, and any experiment
should strive to deal with that weakness as best as possible -- though,
theoretically, ants might be able to so retrain their responses if necessary,
yet we might never know just what conditions must be present for an ant to
decide to use this capability.)

If I then tested a chimpanzee, I'd be interested if it passed, as I would for
a dolphin (using a modified form of the test, of course).

But if a dog showed the ability to recognize such a situation and decide to
retrain itself (even though the situation might have to be simplified or
changed to be recognizable to the dog -- I'm only trying to prove some kind
of "conscious" retraining of its own reactions), I'd be VERY impressed with
the dog or dogs in general.

I fully expect that life forms considered to be of less intelligence than,
say, a dog or cat, would be incapable of passing the experiment under
any situation, and, if that was the only other choice, would (effectively)
choose death.

On the other hand, I'm a firm disbeliever in the "disease" theory of
alcoholism and other isms/habits, since I know (as does anyone else who cares
to research the subject) that the claim that a human being cannot "cure" him
or herself of such a habit without some kind of medical or external attention
has been disproved.  For example, I believe that humans could pass that
test that white mice fail, the one where they push a button that stimulates
a pleasure center of the brain until they die of thirst or starvation, even
though food and water are nearby (directly or accessible simply by pushing
another button).  Whether a human could pass the test with no prior "warning"
is a separate question from whether it could pass having observed the test's
effect on a white mouse and understood the implications, and having been
given time to recondition itself if it so chose.  (Observing how a human
reconditions itself to break away from constantly stimulating itself to
get some munchies would be interesting in itself!)

So while we may accept the limitation that in a given situation, we respond
to a stimulus in a fashion not directly governed by our capacity for
rational or logical thought, I think it is going too far to extend this
concept to include the limitation that we cannot in any way employ our
"higher" capacities in the training of our own responses.

(Be aware that I also do not believe man is ultimately composed entirely
of atomic particles, i.e. I don't believe creation is described entirely in
matter.  This might appear to some to affect how I view any attempt to
reproduce man's intelligence via a machine.  However, I do find the whole
field fascinating, worth pursuing, and highly revelatory, both from a
scientific standpoint and a philosophical one, so I feel perfectly comfortable
participating in the process.  And since I no more fully understand the
implications of my own beliefs than do any others the implications of theirs --
for example, the belief that a man is entirely describable and thus
reproducible at a material, or atomic/subparticulate, level -- I have at least
as much to learn from these discussions as anyone else.)

James Craig Burley, Software Craftsperson    burley@world.std.com

cam@aipna.ed.ac.uk (Chris Malcolm) (10/26/90)

In article <1990Oct19.201604.7280@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:

>    Chris Malcolm, in a different post, dismissed Michael Bender's 
>    query of "How can we build something we can't define" by claiming
>    this was done all the time in research prototypes.  This is also
>    sloppy thinking.  (It may well have been intended as a joke, but I
>    think the point is serious.)  Prototypes are not objects which
>    satisfy any definition; they are instances in search of a definition.
>    If one is unable to define a property a system is supposed to have,
>    and someone claims to have built a research prototype that has that
>    property, there is no way to check that the property is there. 
>
>    The purpose of science is (I thought) to search for descriptions
>    of the world, to be offered in terms of defined quantities, and
>    then to test those descriptions for adequacy.

That is a simplified rational reconstruction of science, and is a
suitable slogan for keeping teenage wannabee scientists in some kind of
order. Since it is also communicatively efficient, scientists, just as
do mathematicians, endeavour to present their results in accordance with
this paradigm; and, just as with mathematicians, it bears little
resemblance to the way they actually work.

For example, this news group has recently been full of complaints and
suggestions from people who -- motivated by precisely the view of
science you present here -- would like all AI research to stop until
someone has found a good definition of intelligence. When I said that in
constructing research prototypes we often found ourselves building
things we couldn't define I was perfectly serious. It is not uncommon
for a good research prototype to be the subject of heated debate over
precisely what it exemplifies for a number of years. The final
interpretation is sometimes -- even in the eyes of the inventor -- quite
different from the original intention. One of the distinguishing
features of a good research prototype is just this long-term
fruitfulness.

There is a great deal more to constructing a fruitful research prototype
than "to search for descriptions of the world, to be offered in terms of
defined quantities, and then to test those descriptions for adequacy."
I'm sorry that I can't specify exactly what it is, but I know it when I
see it!
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

fostel@eos.ncsu.edu (Gary Fostel) (10/27/90)

I said:

    The purpose of science is (I thought) to search for descriptions
    of the world, to be offered in terms of defined quantities, and
    then to test those descriptions for adequacy.

And Chris Malcolm, at Edinburgh, replied:
  
    That is a simplified rational reconstruction of science, and is a
    suitable slogan for keeping teenage wannabee scientists in some kind of
    order. Since it is also communicatively efficient, scientists, just as
    do mathematicians, endeavour to present their results in accordance with
    this paradigm; and, just as with mathematicians, it bears little
    resemblance to the way they actually work.

Perhaps I am still a teenage wannabe scientist.  I am rather familiar with
the work habits of "scientists" -- the good and the bad.  I had thought I
was talking about an issue of how one ought to proceed, not how people
often do, for a variety of oft pressing pragmatic reasons.  There was a
time when scientists studied the philosophy of science as part of their
training; this is less common these days.  It shows. 

Malcolm continued:

    For example, this news group has recently been full of complaints and
    suggestions from people who -- motivated by precisely the view of
    science you present here -- would like all AI research to stop until
    someone has found a good definition of intelligence.

The fact that someone may misapply a basic principle of the scientific
method is not an invalidation of the principle.  Perhaps that is how these
people drew this conclusion.  I saw only a few remnants of rubble of that 
debate but if these "people" really said what Malcolm thinks they said, they
clearly misapplied the principle.  I wonder if they think they said what
Malcolm thinks they said.  If AI people are "scientists" then there may well
be some serious methodological weaknesses in current work -- weaknesses 
that are perhaps strengths if one relabels the method.  Bad technique in 
science may be excellent philosophy, mathematics, or, especially, engineering.
A few words, brutally quoted out of context from Malcolm's note, 
help make my point: "constructing", "building", "inventor".  These are
terms drawn from engineering, not science.  

There are sciences, especially biological, in which enormous effort is
put into "building a system" in order to use it to study a problem.  However,
once the system is built, it is subjected to rigorous examination,
usually called "characterizing" the system, so that the REAL work of science
can then be done: performing well-controlled experiments in a well-understood
setting to prove or disprove theories about the elements involved in that
system.  The situation in AI ... and other branches of Computer "Science"
as well ... is quite different, as Malcolm confirms when he says:
 
    It is not uncommon
    for a good research prototype to be the subject of heated debate over
    precisely what it exemplifies for a number of years. The final
    interpretation is sometimes -- even in the eyes of the inventor -- quite
    different from the original intention. One of the distinguishing
    features of a good research prototype is just this long-term
    fruitfulness. 

The "long term fruitfulness" of the "research prototype" comes from the
"heated debate" not from the specific well controlled experiments performed
using this prototype.  The difference is quite clear.  This is not to say
that everyone doing AI should pack it up, but it does suggest that if AI
comes under criticism for not being a proper scientific discipline, AI
people should not deny that such is the case.  If someone suggests that 
more rigorous scientific methods might provide long-term benefits to the
field, this should not be refuted by claiming that such methods are already
being applied.  That just makes it look as if the practitioners do not
understand what it is they are being advised to consider.  Rather, it might
be the case that the scientific method simply can not be applied or would
not be fruitful, and that can be the basis for defending the status quo in AI.

But, of course, it might also be the case that those methods would help.  
  
----GaryFostel----                        Department of Computer Science
                                          North Carolina State University

fostel@eos.ncsu.edu (Gary Fostel) (10/27/90)

A nice capsule statement about the "value" of emergents was recently
posted by Minsky, at MIT:
   
   The amazing thing is how rarely anything resembling an "inexplicable
   emergent" has ever reigned for very long in the history of Science --
   except for transient periods before better theories remove the need
   for the assumption of extra, special laws.  The moral is that,
   whenever you're pretty sure you are dealing with a "genuine emergent",
   you're probably making a mistake.

I was discussing the idea of emergence with a physicist recently and 
something he said rings very true with this view of emergence.  He suggested
that gravity might be considered an emergent property of collections of
elementary particles.  In fact the "inexplicable emergent" property
of gravity and the effort to make it more explicable has been a dominant
influence on the direction of 20th century physics. 

"Scientists" generally do not like emergent properties and tend to devote 
their lives to eliminating them. 

----GaryFostel----                       Department of Computer Science (?)
                                         North Carolina State University

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/27/90)

In article <1990Oct26.220658.11281@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:
>
>I was discussing the idea of emergence with a physicist recently and 
>something he said rings very true with this view of emergence.  He suggested
>that gravity might be considered an emergent property of collections of
>elementary particles.  In fact the "inexplicable emergent" property
>of gravity and the effort to make it more explicable has been a dominant
>influence on the direction of 20th century physics. 
>

Well, at least to first order -- that is, in Newtonian physics --
gravity is a simple property of single particles; a simple
inverse-square attractive force.  So there is nothing new when you
have larger collections of particles - elementary or otherwise.  We
usually use the term "irreducible" for such phenomena, rather than
"emergent" because nothing new happens (once we have the superposition
theory) with larger collections of particles.
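
Minsky's superposition point can be put numerically.  In a toy Python
sketch (my own illustrative code and numbers, not from any standard
library for physics): the field of a collection of point masses is just
the sum of the individual inverse-square fields, so nothing new appears
as the collection grows.

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def field_at(test_pos, sources):
    """Newtonian gravitational field (force per unit test mass) at
    test_pos, summed over (mass, (x, y)) point sources."""
    fx = fy = 0.0
    for m, (x, y) in sources:
        dx, dy = x - test_pos[0], y - test_pos[1]
        r = math.hypot(dx, dy)
        f = G * m / r**2          # inverse-square magnitude
        fx += f * dx / r          # resolve along the unit vector
        fy += f * dy / r
    return fx, fy

a = [(5.0, (1.0, 0.0))]           # arbitrary example masses/positions
b = [(3.0, (0.0, 2.0))]
fa = field_at((0.0, 0.0), a)
fb = field_at((0.0, 0.0), b)
fab = field_at((0.0, 0.0), a + b)
# superposition: the field of the collection is the sum of the parts
assert abs(fab[0] - (fa[0] + fb[0])) < 1e-20
assert abs(fab[1] - (fa[1] + fb[1])) < 1e-20
```

Because the law is linear, the collection is fully accounted for by its
components -- which is exactly why "irreducible" rather than "emergent"
seems the right word here.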

To be sure, the gravitic phenomena in general relativity do not add in
such a linear fashion, but (as I understand the situation, which is
not very deeply) there is no reason to suppose any permanent mystery
once the correct field equations are found.  I'm not saying that
gravity is understood, only that there is no reason to think that
basically new phenomena arise with large collections of particles.

In Eddington-style "fundamental theories", the gravitational constant
is indeed a function of the total number of particles in the universe.
(These theories have not stood the "test of time", for what that's
worth.)  So you might consider that an "emergent", of a particularly
uniform and "simple" kind.

aboulang@bbn.com (Albert Boulanger) (10/27/90)

In article <3499@media-lab.MEDIA.MIT.EDU> minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) writes:

   In quantum electrodynamics as well, I have the impression that, again,
   there are no mysterious emergents, in the sense that the two-at-a-time
   exchange-interactions account for everything. However, each exchange
   implies a new particle, and you have to include the two-at-a-time
   interactions of all of these, hence the annoying infinite series.
   Also, now things are a little different, for many-particle problems,
   because the equations can no longer be solved within a manifold of
   fixed dimension: they cannot be solved in a low-order vector space,
   but require the dimensionality of at least the
   configuration-space.  Despite all that complexity, however, one still
   feels that the predictions come directly, albeit in a complicated
   manner, from one's understanding of the elementary particles and their
   local interactions.  No mysterious emergents.


I guess you are right, but the 1/r^7 Casimir potential always seemed
bizarre to me ;-). (This is a force between two plates that is
explained by the vacuum fluctuations. A recent result is that light
will travel faster between two such closely spaced plates.) You know,
QM does not support chaos (it tries real hard as witnessed by studies
in quantum chaos, but does not quite hack it). QED is an uninteresting
domain to look for emergent properties because of the essential point
below:


Ahem, now to the real point. The term 'emergent' is one that has been
used in experimental *non-linear* science (the land o' chaos) and the
significance of what it implies is reduced if taken out of that
context. For instance, the principle of superposition does not work in
nonlinear systems in general. This is a very important fact. Simple CA
systems (in general they have nonlinear local interactions dictated by
table lookup), like the life game, Ising systems, and the HPP lattice
gas (where people were initially surprised that spherical waves would
emerge from a hexagonal lattice), are nice contexts in which to examine
this issue of emergent properties. Again, 4-wave-mixing in
photorefractive crystals (October's Scientific American has an article
on photorefractive crystals) and video feedback are examples of physical
nonlinear systems to use as contexts. It is the fact that
superposition does not work in nonlinear systems in general that has
given birth to *experimental* mathematics. Of course, after the
emergence, the reason for the emergence can be obtained, but this is a
pretty trivial point if you ask me.
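
The life game makes this concrete: the glider is a coherent structure
whose steady diagonal drift is nowhere stated in the local rule.  A
small sketch (my own coordinates and helper names, purely illustrative):

```python
from collections import Counter

def step(live):
    """One generation of Conway's life on an unbounded grid.
    live is a set of (row, col) cells."""
    # count, for every cell adjacent to a live cell, its live neighbours
    counts = Counter((r + dr, c + dc) for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # birth on 3 neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# the classic glider:
#   .X.
#   ..X
#   XXX
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# after 4 generations the same shape reappears, shifted down-right
assert g == {(r + 1, c + 1) for (r, c) in glider}
```

Nothing in the table-lookup rule mentions "motion"; the travelling
object only shows up at the level of the whole configuration -- though,
as you say, once it has emerged one can go back and explain it.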


Not just a linear superposition,
Albert Boulanger
aboulanger@bbn.com

cam@aipna.ed.ac.uk (Chris Malcolm) (10/29/90)

In article <1990Oct26.214354.11063@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:
>I said:
>
>    The purpose of science is (I thought) to search for descriptions
>    of the world, to be offered in terms of defined quantities, and
>    then to test those descriptions for adequacy.
>
>And Chris Malcom, at Edinbourough, replied:

(Chris Malcolm at Edinburgh, actually.)

>    That is a simplified rational reconstruction of science ....
>    it bears little
>    resemblance to the way they [scientists] actually work.
>
>Perhaps I am still a teenage wannabe scientist.  I am rather familiar with
>the work habits of "scientists" -- the good and the bad.  I had thought I
>was talking about an issue of how one ought to proceed, not how people
>often do, for a variety of oft pressing pragmatic reasons.

You mistake the seriousness of my point. I did not mean that scientists
for "oft pressing pragmatic reasons" take short-cuts and fail to conform
to the proper letter of the scientific method, but rather that only a
limited amount of science can actually be done according to your
description (above). Your description applies to a particular phase in
a particular kind of science, e.g., looking for a new planet,
hypothesized according to the Newtonian paradigm to explain observed
orbital perturbations.

>There was a
>time when scientists studied the philosophy of science as part of their
>training; this is less common these days. It shows. 

Ah, so you didn't recognise the Lakatosian undertones behind what I was
saying? Of course not, since your initial description (quoted above)
sounds like a rather Baconian or Humean view of science -- though that is
perhaps a cruel generalisation from a rather small sample, which could
be generously read as Popperian.

>If AI people are "scientists" then there may well
>be some serious methodological weaknesses in current work -- weaknesses 
>that are perhaps strengths if one relabels the method.  Bad technique in 
>science may be excellent philosophy, mathematics, or, esp. engineering.
>A few words, brutally quoted out of context from Malcom's note, 
>help make my point: "constructing", "building", "inventor". These are
>terms drawn from engineering, not science.  

Don't be silly. They are terms which apply in general to making things.
What distinguishes engineers from scientists is not WHAT they make but
WHY they make it. Some of the largest and most expensive "constructions"
in the world were "invented" and "built" by physicists to study
sub-atomic particles.

>There are sciences, especially biological, in which enormous effort is
>put into "building a system" in order to use it to study a problem. However,
>once the system is built, it is subjected to rigorous examination,  
>usually called "characterizing" the system so that the REAL work of science
>can then be done: performing well controlled experiments in a well understood
>setting to prove or disprove theories about the elements involved in that
>system.  The situation in AI ... and other branches of Computer "Science"
>as well ... is quite different,

AI is a relatively young discipline, and has not yet become properly
institutionalised in the bricks and mortar of our Universities, and so
it happens that it is pursued in such convenient host departments as CS,
EE, Philosophy, Linguistics, Psychology, etc.; but if you, Gary, really
seriously think that AI is _properly_ a part of Computer Science then
I'd be interested to hear why.  For my part I think considering AI to be
a branch of CS is as silly as considering Astronomy to be a branch of
Optics.

>Malcom confirms [Gary's view that AI is not a science] when he says:

>    It is not uncommon
>    for a good research prototype to be the subject of heated debate over
>    precisely what it exemplifies for a number of years. The final
>    interpretation is sometimes -- even in the eyes of the inventor -- quite
>    different from the original intention. One of the distinguishing
>    features of a good research prototype is just this long-term
>    fruitfulness. 
>
>The "long term fruitfulness" of the "research prototype" comes from the
>"heated debate" not from the specific well controlled experiments performed
>using this prototype.  The difference is quite clear.  This is not to say
>that everyone doing AI should pack it up, but it does suggest that if AI
>comes under criticism for not being a proper scientific discipline, AI
>people should not deny that such is the case.

Unless of course the criticism comes from people with a naively
Procrustean view of what properly constitutes a science.

>If someone suggests that 
>more rigorous scientific methods might provide long-term benefits to the
>field, this should not be refuted by claiming that such methods are already
>being applied.

Well, I don't see why it shouldn't; but as it happens I'm refuting it by
attacking your view of science.

>Rather, it might
>be the case that the scientific method simply can not be applied or would
>not be fruitful, and that can be the basis for defending the status quo in AI.

Here you seem to suggest that there may actually be domains of enquiry
better investigated by methods other than scientific! Really?  Does this
not rather suggest a view of the scientific method so narrowly dominated
by the particular practices of a few sciences as to condemn those too
different to the status of non-sciences?  Some psychologists were
foolish enough to heed the advice of such narrowly educated philosophers
of science, and as a consequence nearly flushed their discipline down
the toilet in a spasm of physics envy.

Do you, Gary, really think that there are matters-of-fact in the
Universe to the discovery of which other methods than the scientific are
best fitted?
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

JAHAYES@MIAMIU.BITNET (Josh Hayes) (10/30/90)

I seem to be hearing that many out there believe there is no such
thing as emergence, reasoning that (paraphrased) "if we REALLY
and FULLY understood the system and property in question, we would
understand how that property arises, therefore it would no longer
be emergent". This seems to me to miss the point (and is probably
an unfair characterization, to boot, but where would we be without
rhetoric? :-) ), which OUGHT to be that systems describable as
hierarchical in some sense can have properties which "reside" at
particular levels in the system, or are observable only when the
system achieves some minimum level of complexity. Again, I harp
on ecosystem theory for examples. Ecosystems have properties, or
descriptors, which are meaningless when applied to the parts of
the system, or even to the collection of the parts without the
structure inherent in the system. That these properties are of
the ecosystem itself, and not just a conglomerate of some property
shared by its parts, is a non-trivial point, and I must add, one
which is not taken as proven in this case....
 
With respect to AI, I missed the initiation of this thread, but
I suspect it had something to do with consciousness and where
consciousness resides. The idea that a complete understanding of
the brain, mind, and the relationship between the two, would render
the question answerable does not vitiate the fact that consciousness
is not resident in individual neurons (or is it?), but somewhere
in there, as complexity accrues, it shows up. That is an important
property in human, or at least biological, cognition. Whether it is
necessary to be applied to AI systems is to me unclear. I have a
couple of readings that I might suggest to go along with this, but
I'll have to rummage through my reprint files for a bit...stay tuned.
 
Josh Hayes, Zoology Department, Miami University, Oxford OH 45056
voice: 513-529-1679      fax: 513-529-6900
jahayes@miamiu.bitnet, or jahayes@miamiu.acs.muohio.edu
"Ain't nothin' worth nothin' that ain't no trouble."
                         --unidentified gardener, Austin, TX

fostel@eos.ncsu.edu (Gary Fostel) (10/31/90)

In a previous note, I said:

   I was discussing the idea of emergence with a physicist recently and 
   something he said rings very true with this view of emergence.  He suggested
   that gravity might be considered an emergent property of collections of
   elementary particles.  In fact the "inexplicable emergent" property
   of gravity and the effort to make it more explicable has been a dominant
   influence on the direction of 20th century physics. 

To which Minsky, at MIT replied:

   Well, at least to first order -- that is, in Newtonian physics --
   gravity is a simple property of single particles; a simple
   inverse-square attractive force.  So there is nothing new when you
   have larger collections of particles - elementary or otherwise.  We
   usually use the term "irreducible" for such phenomena, rather than
   "emergent" because nothing new happens (once we have the superposition
   theory) with larger collections of particles.

The trouble with this dismissal of the "emergence" of gravity is that 
the dismissal is predicated upon a theory that has never been demonstrated.
I thought we were discussing observed properties of systems.  Unless I am
mistaken, there has never, ever, been a demonstration of the existence of
a gravitational field due to a single particle.  Not only is this far beyond
the sensitivity of current instrumentation, but in fact depending on the
theory of gravitation you like, it might not be true anyway. 

Minsky dismissed gravity as an emergent property of collections of 
particles by presuming a theory that may explain the  observations; that
seems to be a bit of a cheat.  As Minsky noted himself recently,
"emergence" is a threefold thing, involving the objects,
the observations, and the a priori understanding of the observer. 

There *are* new things that happen when there are collections of particles:
gravity can actually be observed, rather than theorized.  The emergent
property called gravity is fairly well understood, i.e. inverse square
laws etc, in a behavioral sense,  but just why it emerges remains an 
unsolved question -- or one might say "an inexplicable emergent".

My own low esteem for the term "emergent" is related to the sort of
trouble gravity caused: people may begin to think that descriptions
of behavior constitute an explanation of the behavior.  This is especially
true when the observer is quite similar to the objects being observed,
as will often be true in cognitive science or Artificial Intelligence.

----GaryFostel----                          Department of Computer Science
                                            North Carolina State University 

fostel@eos.ncsu.edu (Gary Fostel) (10/31/90)

In previous notes, I said a lot of things, mainly supporting the value of a
traditional view of what is or is not "science".  Malcolm, at Edinburgh,
said many things defending the relatively looser (flexible?) view of
what is or is not science.  These notes will grow in size exponentially
if I try to requote every place he quoted me, along with his comments, and
then add my own. 

Apparently I'm either a Humean (close enough to "human" that I like it), 
or a Baconian (a bit too porcine), and maybe even a Popperian (is this
a reference to the Marvel Omniverse Planet?)  Nor was I able to recognize
the Lakatosian underpinnings of Malcolm's opinions.  (I believe all of this
was my punishment for suggesting that a good grounding in the Philosophy
of Science would help people doing AI work.)  I don't recognize Lakatose,
but I'll bet a pint (if I ever make it to Edinburgh or vice versa) that he or
she did their writing and thinking around the time of WW II or just afterward
when supposed intellectual underpinnings were created to support the rise
of the "soft" sciences.  

Malcolm suggests that it was a narrow minded view of science that caused
psychologists to "nearly flush their discipline down the toilet in a spasm 
of physics envy."  (wonderful mixed metaphors there:-)  I think the blame 
belongs not to those who tried to maintain a coherent definition of science, 
but to the psychologists who were insecure about their methods and results.
Historically, this was a time when "science", especially physics,  was
in full bloom having just helped win the war,  and there was plenty of envy
in other fields.

Malcolm reveals the same chip-on-the-shoulder attitude when he asks about the
shortcomings of "a view of the scientific method so narrowly dominated
by the particular practices of a few sciences as to condemn those too
different to the status of non-sciences?"  Well, first of all, it is not
restricted to the practices of a few sciences, but there are a few fields of
enquiry in which the methods happen to be "scientific".  He begs the question
in his statement of it.  But the real point is that not being a science is
not "condemning" anything.  It just means it is not a science.  Lawyers are
not troubled to be in a non-scientific field, nor are doctors, artists, 
engineers and a great many other respectable people.  What is so bad about
not being a science? 

This has gotten too long again, but Malcolm asked me two specific questions
which I will address, namely do I believe: 

   that AI is _properly_ a part of Computer Science [he continued] then
   I'd be interested to hear why.  For my part I think considering AI to be
   a branch of CS is as silly as considering Astronomy to be a branch of
   Optics.

and 

   that there are matters-of-fact in the
   Universe to the discovery of which other methods than the scientific are
   best fitted? 

The first seems to be a variation on the "When did I stop beating my wife"
classic.  The relative "size" and "scope" of astronomy and optics, as the 
question is posed, are absurd and arguably insulting to astronomers and, by
analogy, to Computer "Scientists".  (Also not a science.)  The second is
not so crudely baited a hook, and much depends on what a "matter of fact" 
is, but I'll bite and claim that Mathematics is an example where matters
of fact are discovered by non-scientific means.  Are there other examples?
Well, yes ... for example ... AI ....    (Sorry, couldn't resist.)

----GaryFostel----                     Department of Computer Stuff
                                       North Carolina State University

csmith@cscs.UUCP (Craig E. Smith) (10/31/90)

In <1990Oct31.001104.22908@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:


>In previous notes, I said a lot of things, mainly supporting the value of a
>traditional view of what is or is not "science".  Malcolm, at Edinburgh,
>said many things defending the relatively looser (flexible?) view of
>what is or is not science.  

In the general sense of the word, a "science" is simply any subject
which can be systematically studied in a logical manner, and the
related body of knowledge generated by that study. In most cases,
the decision of whether or not to call a particular study a science
is primarily a political consideration, and is largely based on whether 
one accepts that the field can be systematically, and logically 
investigated.


-- 
--------------------------------------------------------------------------
  If you want a picture of the future,  | Internet:     csmith@cscs.UUCP 
  imagine a boot stomping on a human    | UUCP:     ... uunet!cscs!csmith 
  face - forever.  - George Orwell      |---------------------------------

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (11/01/90)

In <1990Oct30.220248.20784@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:

>The trouble with this dismissal of the "emergence" of gravity is that 
>the dismissal is predicated upon a theory that has never been demonstrated.
>I thought we were discussing observed properties of systems.  Unless I am
>mistaken, there has never, ever, been a demonstration of the existence of
>a gravitational field due to a single particle.  Not only is this far beyond
>the sensitivity of current instrumentation, but in fact depending on the
>theory of gravitation you like, it might not be true anyway. 

True, but the point may have been that the physicist spoke of gravity
"emerging" from a complex system of particles (much as some cognitive
scientists claim "intelligence" emerges from a complex system of
neurons).  Minsky pointed out that this property exists for just two
particles.  The assumption is that two particles, like two neurons,
are not a complex enough system to provide "emergence" (just what is
complex enough is never addressed by the emergites).  Hence whatever
the property is, it's not emergent.

At least, that's how I read his argument as you posted it.

- Jim Ruehlin

n025fc@tamuts.tamu.edu (Kevin Weller) (11/01/90)

In article <90302.163701JAHAYES@MIAMIU.BITNET> JAHAYES@MIAMIU.BITNET (Josh Hayes) writes:

>   I seem to be hearing that many out there believe there is no such
>   thing as emergence, reasoning that (paraphrased) "if we REALLY
>   and FULLY understood the system and property in question, we would
>   understand how that property arises, therefore it would no longer
>   be emergent". This seems to me to miss the point (and is probably
>   an unfair characterization, to boot, but where would we be without
>   rhetoric? :-) ), which OUGHT to be that systems describable as
>   hierarchical in some sense can have properties which "reside" at
>   particular levels in the system, or are observable only when the
>   system achieves some minimum level of complexity. Again, I harp
>   on ecosystem theory for examples. Ecosystems have properties, or
>   descriptors, which are meaningless when applied to the parts of
>   the system, or even to the collection of the parts without the
>   structure inherent in the system. That these properties are of
>   the ecosystem itself, and not just a conglomerate of some property
>   shared by its parts, is a non-trivial point, and I must add, one
>   which is not taken as proven in this case....
>
>   With respect to AI, I missed the initiation of this thread, but
>   I suspect it had something to do with consciousness and where
>   consciousness resides. The idea that a complete understanding of
>   the brain, mind, and the relationship between the two, would render
>   the question answerable does not vitiate the fact that consciousness
>   is not resident in individual neurons (or is it?), but somewhere
>   in there, as complexity accrues, it shows up. That is an important
>   property in human, or at least biological, cognition.

Yes!  After studying the problem on my own, I have come to agree with
your interpretation of emergence.  My original ideas on this were
rather crude; please forgive me for that, since I was merely
conducting a preliminary exploration of the topic.  I'm pursuing
something somewhat different (but related) now, so it may not be all
that refined yet either.

It has long been a problem of materialistic philosophies of mind to
explain how a living organism could possibly result from the assembly
of non-living atoms (or perhaps more to the point, how a conscious
entity could arise from a set of non-conscious neurons).  Emergence
(the idea) was introduced to make materialism consistent with itself
(i.e., to render it *non-self-contradictory*), NOT to disprove it!
Without it, we are left with an absurdity.  Mind you, all this proves
is that materialism is *possible*, NOT that it's true (I think it is,
in a way; more to come...).

Now for my next trick, I'd like to try out a little argument on you
all to demonstrate the absurdity of total reductionism.  Be aware that
an equivalent argument can be presented against total holism, so what
a reasonable person should be after is the right balance for what
he/she happens to be studying at a given time.

It has long been a premise of reductionistic materialism that material
things are the only things really existing, everything else being mere
composites best understood in terms of the matter making it all up.
Does everything make sense only in terms of parts?  Or more to the
point, can consciousness be said to exist at all in a materialistic
universe, or must it be understood properly only by referring to those
parts that produce an illusion of consciousness?  I have been told
that neurons are more real than conscious states because they are more
concrete (material) (even though it's pretty obvious to me that
consciousness, however we might describe it, is real since I
experience it quite directly).  Well, if we must always reduce to
REALLY explain anything, let's start by breaking down the central
nervous system into ever-smaller parts.  As we pass from one
architectural level to the next more fundamental one, we eventually
come to the neuron.  Can we say we REALLY explain consciousness when
we fully understand neurons?  But neurons can be reduced even further,
so why stop now?  We can reduce through the various levels of neuron
structure until we come to, say, a complete understanding of the
molecules making up a neuron.  Now do we REALLY understand
consciousness?  Not according to the reductionist hypothesis, since we
can render molecules asunder and examine atoms, then subatomic
particles, then ... per modern micro-physics, we don't ever come to
anything truly "elementary," but we do arrive at a description-level
below which size and the idea of "parts" have no meaning.  Hence, we
can't REALLY explain anything in a completely reductionistic way as we
can't reduce to any ultimately fundamental description, yet we are
told that the more fundamental, the more real.  If nothing exists but
for its parts, then NOTHING EXISTS: not brains, not neurons, nor
anything else!  Here we have used an argument from reductionism to
disprove reductionistic materialism, QED.

I think this whole idea makes some committed reductionists
uncomfortable because it shows that conscious phenomena are just as
valid as brain phenomena.  This seems to some like a form of dualism,
but it might be more accurately termed a double-aspect theory of mind
since there are (at least) two aspects of the *same* events (mental
and neurological) rather than two separate *substances*.  The need for
a separate mental substance disappears (sort of like the need for
phlogiston to explain combustion or caloric to explain heat) once we
begin to realize that a single system can have many descriptions.  An
even better name might be indefinite-aspect theory, since there is an
indefinite number of ways to describe a given system, none of which
has any more claim to validity than any of the others.  This is one
reason why we have so many different scientific disciplines addressing
different levels of abstraction (four general examples presented from
more fundamental to more abstract: physics, chemistry, biology,
psychology).

-- Kev

cpshelley@violet.uwaterloo.ca (cameron shelley) (11/01/90)

In article <1990Oct31.102704.18335@cscs.UUCP> csmith@cscs.UUCP (Craig E. Smith) writes:
>In <1990Oct31.001104.22908@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:
>
>
>>In previous notes, I said a lot of things, mainly supporting the value of a
>>traditional view of what is or is not "science".  Malcolm, at Edinburgh,
>>said many things defending the relatively looser (flexible?) view of
>>what is or is not science.  
>
>In the general sense of the word, a "science" is simply any subject
>which can be systematically studied in a logical manner, and the
>related body of knowledge generated by that study. In most cases,
>the decision of whether or not to call a particular study a science
>is primarily a political consideration, and is largely based on whether 
>one accepts that the field can be systematically, and logically 
>investigated.
>

  An interesting observation!  I have been looking for some succinct
statement about the philosophy of science since Gary Fostel brought
it up.  In going over my bookshelf, I found a few remarks by W.V.O.
Quine in _Methods of Logic_.

"Logic, like any other science, has as its business the pursuit of 
truth.  What are true are certain statements; and the pursuit of 
truth is the endeavor to sort out the true statements from the others,
which are false....  But scientific activity is not the indiscriminate
amassing of truths; science is selective and seeks the truths that
count for the most, either in point of intrinsic interest or as
instruments for coping with the world." [pg xi]

  He goes on (if I can be trusted to paraphrase :) to describe a 
notion of a system of truths, and the anti-realist position that
such systems are conceptual only and not directly confrontable
with their "subject matter".  He also makes some suggestions on
method - ways of changing the system when it gives wrong predictions.
Would you characterize this as being a description of a "flexible"
notion of science, or a "well-understood" science?  Or is it a
matter of the quantity of truths known, something which Quine 
does not mention?
--
      Cameron Shelley        | "Fidelity, n.  A virtue peculiar to those 
cpshelley@violet.waterloo.edu|  who are about to be betrayed."
    Davis Centre Rm 2136     |  
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

cam@aipna.ed.ac.uk (Chris Malcolm) (11/02/90)

In article <1990Oct31.001104.22908@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:

>Apparently I'm either a Humean (close enough to "human" that I like it), 
>or a Baconian, (a bit too porcine) and maybe even a Popperian (is this
>a reference to the Marvel Omniverse Planet?)  Nor was I able to recognize
>the Lakatosian underpinnings of Malcolm's opinions.  (I believe all of this
>was my punishment for suggesting that a good grounding in the Philosophy
>of Science would help people doing AI work.)

You're right! 

>I don't recognize Lakatose,

Imre Lakatos, and since his collected works (he died young) were
sufficiently popular to appear in a paperback edition (1980), I'm
surprised to find anyone professing familiarity with philosophy of
science who is not aware of his work. 

>but I'll bet a pint (if I ever make it to Edinburgh or vice versa) that he or
>she did their writing and thinking around the time of WW II or just afterward
>when supposed intellectual underpinnings were created to support the rise
>of the "soft" sciences.  

Great! You owe me a pint of Murphy's! The first writings of Lakatos in
our library start with edited proceedings of phil of sci conferences in
1965, and mostly occur in the 1970s (he died young in 1974). He was a
philosopher of science concerned with mathematics and the hard
sciences, and I can't offhand recall a mention of the soft sciences.

Those of us who are fans of Lakatos regard him as the philosopher of
science who is closest to understanding the scientific method. His
analyses of Popper and Kuhn are the best such crits known to me.  Phil
of Sci is not my field, but, like most of the AI researchers I know, I
think Phil of Sci very important. Here at Edinburgh (and at Sussex too),
we think it sufficiently important that we even run courses in it for
our AI students! (Yup, we teach them about Lakatos).

>Malcolm reveals the same chip-on-the-shoulder attitude when he asks about the
>shortcomings of "a view of the scientific method so narrowly dominated
>by the particular practices of a few sciences as to condemn those too
>different to the status of non-sciences?"  Well, first of all, it is not
>restricted to the practices of a few sciences, but there are a few fields of
>enquiry in which the methods happen to be "scientific".  He begs the question
>in his statement of it.  But the real point is that not being a science is
>not "condemning" anything.  It just means it is not a science.  Lawyers are
>not troubled to be in a non-scientific field, nor are doctors, artists, 
>engineers and a great many other respectable people.  What is so bad about
>not being a science? 

Nothing. I'm not being chip-on-shoulder, it wouldn't upset me at all not
to be a scientist, it just so happens that I think that AI is a science.
Why? Because (unlike law or engineering) AI is trying to understand
something, to add to human knowledge of the Universe. The things we
make, and the programs we write, are experiments designed to teach us
something.

>This has gotten too long again, but Malcolm asked me two specific questions
>which I will address, namely do I believe: 
>
>   that AI is _properly_ a part of Computer Science [he continued] then
>   I'd be interested to hear why.  For my part I think considering AI to be
>   a branch of CS is as silly as considering Astronomy to be a branch of
>   Optics.
>
>and 
>
>   that there are matters-of-fact in the
>   Universe to the discovery of which other methods than the scientific are
>   best fitted? 
>
>The first seems to be a variation on the "When did I stop beating my wife"
>classic.  The relative "size" and "scope" of astronomy and optics as the 
>question is posed is absurd and arguably insulting to astronomers and by
>analogy, to Computer "Scientists".  (Also not a science.)

Well, I already knew that's how you felt about it, but I don't think the
relative size and scope of the disciplines has much to do with the point
of principle involved. You haven't answered my question, you have merely
re-iterated your opinion.

>The second is
>not so crudely baited a hook, and much depends on what a "matter of fact" 
>is, but I'll bite and claim that Mathematics is an example where matters
>of fact are discovered by non-scientific means.  Are there other examples?
>Well, yes ... for example ... AI ....    (Sorry, couldn't resist.)

I'm sure you don't suppose that AI is engaged in discovering logical
tautologies, and I hope you will agree that one of the aims of AI is to
discover what are the architectural principles of mind. That seems to me
to be the kind of knowledge appropriately called scientific. Now you may
very well suggest that some AI researchers are *bad* scientists, in that
their methods are unscientific, and will not result in the kind of
knowledge they claim to seek, but I understood your claim to be not
that, but that AI is not a science. You still haven't explained why.

[And you still owe me a pint of Murphy's!]

Here's some Lakatos references:

AUTHOR: International Colloquium in the Philosophy of Science , 1965 , Bedford
          College, England * Lakatos , Imre  , ed.
TITLE: Criticism and the growth of knowledge / edited by Imre Lakatos, Alan Mu>
IMPRINT: Cambridge University Press 1970

AUTHOR: International Colloquium in the Philosophy of Science , 1965 , Bedford
          College * Lakatos , Imre  , ed.
TITLE: Problems in the philosophy of science / edited by Imre Lakatos [and] Al>
IMPRINT: Amsterdam North-Holland Pub. Co. 1968

AUTHOR: Lakatos , Imre  d. 1974
TITLE: Philosophical papers / Imre Lakatos / edited by John Worrall and Gregor>
IMPRINT: Cambridge Cambridge University Press 1978
(paperback edn 1980)

AUTHOR: Lakatos , Imre  d. 1974
TITLE: Proofs and refutations : the logic of mathematical discovery / Imre Lak>
IMPRINT: Cambridge Cambridge University Press 1976
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

fostel@eos.ncsu.edu (Gary Fostel) (11/02/90)

I like the quote by Quine (posted for our reading pleasure by Shelly,
at Waterloo.  But the very first line of the quote, (emphasis added),
should be a red flag. Again, Quine, pg xi in "Methods of Logic":
     
   Logic, *like*any*other*science*, has as its business the pursuit of 
   truth.  What are true are certain statements; and the pursuit of 
   truth is the endeavor to sort out the true statements from the others,
   which are false....  But scientific activity is not the indiscriminate
   amassing of truths; science is selective and seeks the truths that
   count for the most, either in point of intrinsic interest or as
   instruments for coping with the world. 

One of Quine's premises is clearly that science is broadly defined.  Like
Lakatos, he was a post-WWII writer.  I have trouble seeing logic as a science.
For example, where are the experiments that add new assertions
to the collected set that are not logically deducible from the existing set?
My own notion of "science" is inextricably linked to experimentation, and 
there is none in logic.  I'd like to replace Quine's "scientific activity"
by "research activity" in his paragraph and then I'd be quite happy with it.

Interestingly, there is never any emergence in logic.  A new assertion does
not somehow emerge by virtue of some sort of critical mass of assertions. 
At least not in logic.  "Human logic" is probably a different matter.

----GaryFostel----                        Department of Computer Science
                                          North Carolina State University

fostel@eos.ncsu.edu (Gary Fostel) (11/02/90)

Jim Ruehlin at NCR, San Diego, wrote:

>In <1990Oct30.220248.20784@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel)
>writes:
>
>>The trouble with this dismissal of the "emergence" of gravity is that 
>>the dismissal is predicated upon a theory that has never been demonstrated.
>>I thought we were discussing observed properties of systems.  Unless I am
>>mistaken, there has never, ever, been a demonstration of the existence of
>>a gravitational field due to a single particle.  Not only is this far beyond
>>the sensitivity of current instrumentation, but in fact depending on the
>>theory of gravitation you like, it might not be true anyway. 
>
>True, but the point may have been that the physicist spoke of gravity
>"emerging" from a complex system of particles (much as some cognitive
>scientists claim "intelligence" emerges from a complex system of
>neurons).  Minsky pointed out that this property exists for just two
>particles.  The assumption is that two particles, like two neurons,
>are not a complex enough system to provide "emergence" (just what is
>complex enough is never addressed by the emergites).  Hence whatever
>the property is, it's not emergent.
>
>At least, that's how I read his argument as you posted it.

OK. But ...

I don't think it matters that much whether we are dealing with one particle
or two.  It could be scads of particles.  No one has ever observed gravity
acting on anything less than jillions of particles.  We all believe it acts
on each particle but that is unfounded.  I don't think there is even any
indirect evidence of gravity until you get to macroscopic sized objects.
The gravitational force is many orders of magnitude weaker than even the
weakest of the other forces.  I don't often find myself disagreeing with
Minsky, but this time I think he's wrong.  

----GaryFostel----                       Department of Computer Science
                                         North Carolina State University

cpshelley@violet.uwaterloo.ca (cameron shelley) (11/02/90)

In article <1990Nov1.204417.7120@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:
>
>One of Quine's premises is clearly that science is broadly defined.  Like
>Lakatos, he was a post-WWII writer.  I have trouble seeing logic as a science.
>For example, where are the experiments that add new assertions
>to the collected set that are not logically deducible from the existing set?
>My own notion of "science" is inextricably linked to experimentation, and 
>there is none in logic.  I'd like to replace Quine's "scientific activity"
>by "research activity" in his paragraph and then I'd be quite happy with it.
>

You may be comparing logic and (let's say) physics on uneven terms.  The
view of logic you appear to take is that of a timeless structure of
related statements, while physics progresses with time by experimentation.
New results in logic are achieved over time by coming up with 
conjectures (unproven statements), creating a derivation of them from
known results ("create" because there is no algorithm to produce them),
and submitting these for verification by peers.  Logicians' 'experiments'
are obviously more like the Gedankenexperiments of theoretical physics
than the directed operation of engineering tools in an experimental 
physicist's lab, but the function is analogous.  Certainly, there is
a difference in the physical undertaking of verification, but it is an
effect of refering back to the subject matter being investigated, and
not, I think, of some fundamental conceptual gap between the two.

It strikes me that the effort of withholding the term "science" from
things like logic, math, etc... is more a function of preference than
of necessity.

>Interestingly, there is never any emergence in logic.  A new assertion does
>not somehow emerge by virtue of some sort of critical mass of assertions. 
>At least not in logic.  "Human logic" is probably a different matter.
>

I believe your first statement there is debatable.  As I've argued
before, the axioms of a logic system themselves can be considered
'emergent' since their appearance is not predictable from their
composition (trivially, since they have no defined "parts" to be
composed of).  They exist and have meaning only because it seems
they should to people, not because their existence is 'revealed' to
us or because they impinge upon our senses - which is not sufficient
for knowledge anyway.

As to the second statement, I hope no one has proposed that assertions
arise magically.  Emergence in a system exists due to the inescapable
interaction of a subjective observer with system observed.  Both
philosophy and physics have had to come to grips with this during
the current century.  Whether we see information as existing only
at a particular minimum level of description says only something about
us (and our model), and is not binding on reality - at least that
is my contention! :>

Human logic?  Hmmm.  The discussions in this newsgroup are as good
a place as any to look at that!

PS.  I *do* realize that what I say is based on my own view of
'knowledge' and 'belief' which you may or may not share.  Since
this is a philosophy group, I have no apologies to make on that
score.  Disagree!  "The play's the thing" after all... :>
--
      Cameron Shelley        | "Fidelity, n.  A virtue peculiar to those 
cpshelley@violet.waterloo.edu|  who are about to be betrayed."
    Davis Centre Rm 2136     |  
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

csmith@cscs.UUCP (Craig E. Smith) (11/02/90)

In <1990Nov1.204417.7120@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:

>One of Quine's premises is clearly that science is broadly defined.  Like
>Lakatos, he was a post-WWII writer.  I have trouble seeing logic as a science.
>For example, where are the experiments that add new assertions
>to the collected set that are not logically deducible from the existing set?
>My own notion of "science" is inextricably linked to experimentation, and
>there is none in logic.  I'd like to replace Quine's "scientific activity"
>by "research activity" in his paragraph and then I'd be quite happy with it.

I have never heard anyone claim that all new scientific assertions must
not be logically derivable from existing assertions. If a science is
consistent, and well developed, I should think new assertions usually would 
be logically derivable. The experimentation in logic, as in mathematics, 
is in deriving hitherto unknown relationships from known "facts". The 
original basis for logic, and mathematics is real world experience, but 
they have been abstracted to the extent that they are virtually independent 
of the real world, and yet they are still consistent with it. The entire 
structure of logic, as well as mathematics, is based on axioms that 
have been arbitrarily defined by humans, but they are based on human 
observation in the first place. We can observe that one plus one equals 
two, but we accept axiomatically what one, two, plus and equals mean, 
just as the physicist accepts axiomatically the meaning of gravity, even 
though he can't show why it exists. The difference is that assertions
of physics must be consistent with axioms derived from the physical
world, which, as we learn more about the physical world, may change,
whereas logic, and mathematics depend on predefined axioms that 
generally do not change.

>Interestingly, there is never any emergence in logic.  A new assertion does
>not somehow emerge by virtue of some sort of critical mass of assertions. 
>At least not in logic.  "Human logic" is probably a different matter.

It seems to me that the idea of emergence (at least in the way I have
most commonly seen the term used) is a lot like religion, a convenient 
way to explain things that are either too complicated, or about which 
we have too little information to adequately understand.  If you think 
you have a system which is more than the sum of its parts, then probably
you are either overlooking some of the parts, or you are arbitrarily 
defining an axiomatic property which coincides with the properties 
possessed by the system.


-- 
--------------------------------------------------------------------------
  If you want a picture of the future,  | Internet:     csmith@cscs.UUCP 
  imagine a boot stomping on a human    | UUCP:     ... uunet!cscs!csmith 
  face - forever.  - George Orwell      |---------------------------------

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (11/03/90)

In <1990Nov1.205907.7472@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:


>Jim Ruehlin at NCR, San Diego, wrote:
>>True, but the point may have been that the physicist spoke of gravity
>>"emerging" from a complex system of particles (much as some cognitive
>>scientists claim "intelligence" emerges from a complex system of
>>neurons).  Minsky pointed out that this property exists for just two
>>particles.  The assumption is that two particles, like two neurons,
>>are not a complex enough system to provide "emergence" (just what is
>>complex enough is never addressed by the emergites).  Hence whatever
>>the property is, it's not emergent.

>I don't think it matters that much whether we are dealing with one particle
>or two.  It could be scads of particles.  No one has ever observed gravity
>acting on anything less than jillions of particles.  We all believe it acts
>on each particle but that is unfounded.  I don't think there is even any
>indirect evidence of gravity until you get to macroscopic sized objects.
>The gravitational force is many orders of magnitude weaker than even the
>weakest of the other forces.  I don't often find myself disagreeing with
>Minsky, but this time I think he's wrong.  

I'm no physicist, but I think you're begging the question.  If gravitational
theory applies to two individual particles, then I think my argument
holds.  Otherwise, we can't know if it holds or not because we can't
observe gravity operating on anything less than jillions of particles. So
it's unprovable whether gravity is emergent or not.

I don't know if anyone's done it or not yet, but it would be interesting to
see what one could theorize from just two neurons acting together with
no other connections to other neurons (someone must have done this by now).
Can we see the same properties in minutiae that we can see with a more
complex neuronal system?  Are our measurements accurate enough to detect
anything?  Is what we're measuring defined enough to measure?  The proper
experimental design might start to lead to some evidence/disproof of
emergence.
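A hedged illustration of the two-neuron question (in Python, which of course postdates this thread; the wiring is an invented toy, not a model from the literature): couple two binary threshold units so that A is excited by B while B is inhibited by A. Neither unit does anything interesting on its own, but the coupled pair settles into an oscillation.

```python
def step(a, b):
    """One synchronous update of two binary threshold units.
    A is excited by B (A fires iff B fired last step);
    B is inhibited by A (B fires iff A was silent last step)."""
    new_a = 1 if b >= 1 else 0
    new_b = 1 if a == 0 else 0
    return new_a, new_b

a, b = 0, 0
history = []
for _ in range(8):
    a, b = step(a, b)
    history.append((a, b))

print(history)  # the pair cycles through four states, period 4
```

Whether this bears on emergence is exactly the open question in the post; the sketch only shows that the two-unit case is cheap to probe experimentally.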

- Jim Ruehlin

aboulang@bbn.com (Albert Boulanger) (11/04/90)

In article <1990Nov2.103219.24132@cscs.UUCP> csmith@cscs.UUCP (Craig E. Smith) writes:

   It seems to me that the idea of emergence (at least in the way I have
   most commonly seen the term used) is a lot like religion, a convenient 
   way to explain things that are either too complicated, or about which 
   we have too little information to adequately understand.  If you think 
   you have a system which is more than the sum of its parts, then probably
   you are either overlooking some of the parts, or you are arbitrarily 
   defining an axiomatic property which coincides with the properties 
   possessed by the system.


I am amazed by the fact that a view like this can be held! It
indicates an urgent need to inform on recent (last 20 years, let's
say) developments in nonlinear science and mathematics. There are many
fine *analytical* folk in this necessarily experimental field, who are
as uncomfortable as anybody else about not being able to predict the
manifold unexpected behavior of nonlinear systems. There are nonlinear
systems where one can make piecewise linear approximations to the
system and study them from the "bottom-up", but in general
*superposition* does not hold for nonlinear systems. The method is to
first observe the emergent behavior *experimentally* - often using the
computer as a virtual reality - and build the route to its emergence
after the fact. I do not see why this is such a sticky point. I should
also mention that generic motifs are to be found in emergent
properties across many nonlinear systems, and these will probably
become part of an established theory in the decades to come.  Nonlinear
science and mathematics does not halt because of the lack of such a
theory, and the many fine researchers in the field of nonlinear
science do not invoke mysticism, but they must observe first, be
amazed, and *then* explain. This particular and mandatory process of
investigating nonlinear behavior is the setting for the term
"emergence".  Pure mathematics is not untouched by this either:


From "Incompleteness Theorems for Random Reals"
G. J. Chaitin
Advances in Applied Mathematics, 8. 119-146 (1987)

"In conclusion, we have seen that proving whether particular
exponential diophantine equations have finitely or infinitely many
solutions, is absolutely intractable. Such questions escape the power
of mathematical reasoning. This is a region in which mathematical truth
has no discernible structure or pattern and appears to be completely
random. These questions are completely beyond the power of human
reasoning. Mathematics can not deal with them.

Quantum physics has shown that there is randomness in nature. I
believe that we have demonstrated in this paper that randomness is
already present in pure mathematics.  This does not mean that the
universe and mathematics are lawless; it means that laws of a
different kind apply: statistical laws."
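The superposition point above can be made concrete with a stock example (the logistic map, in Python; neither appears in the original post): a nonlinear map does not obey f(a+b) = f(a) + f(b), and its long-run behavior is found by iterating and watching, i.e. by computational experiment.

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), a stock nonlinear system."""
    return r * x * (1.0 - x)

a, b = 0.1, 0.2

# Superposition fails: f(a + b) is not f(a) + f(b) for a nonlinear map.
assert logistic(a + b) != logistic(a) + logistic(b)

# The "experiment": iterate and watch.  The orbit of this simple,
# fully deterministic rule wanders irregularly over the unit interval --
# the kind of behavior one observes first and explains after the fact.
x = 0.1
orbit = []
for _ in range(20):
    x = logistic(x)
    orbit.append(x)

print([round(v, 4) for v in orbit])
```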


Experimentally,
Albert Boulanger
aboulanger@bbn.com

csmith@cscs.UUCP (Craig E. Smith) (11/05/90)

In <ABOULANG.90Nov3133702@poincare.bbn.com> aboulang@bbn.com (Albert Boulanger) writes:

>In article <1990Nov2.103219.24132@cscs.UUCP> csmith@cscs.UUCP (Craig E. Smith) writes:

>   It seems to me that the idea of emergence (at least in the way I have
>   most commonly seen the term used) is a lot like religion, a convenient 
>   way to explain things that are either too complicated, or about which 
>   we have too little information to adequately understand.  If you think 
>   you have a system which is more than the sum of its parts, then probably
>   you are either overlooking some of the parts, or you are arbitrarily 
>   defining an axiomatic property which coincides with the properties 
>   possessed by the system.


>I am amazed by the fact that a view like this can be held! It
>indicates an urgent need to inform on recent (last 20 years, let's
>say) developments in nonlinear science and mathematics. There are many
>fine *analytical* folk in this necessarily experimental field, who are
>as uncomfortable as anybody else about not being able to predict the
>manifold unexpected behavior of nonlinear systems. There are nonlinear
>systems where one can make piecewise linear approximations to the
>system and study them from the "bottom-up", but in general
>*superposition* does not hold for nonlinear systems. The method is to
>first observe the emergent behavior *experimentally* - often using the
>computer as a virtual reality - and build the route to its emergence
>after the fact. I do not see why this is such a sticky point. I should
>also mention that generic motifs are to be found in emergent
>properties across many nonlinear systems, and these will probably
>become part of an established theory in the decades to come.  Nonlinear
>science and mathematics does not halt because of the lack of such a
>theory, and the many fine researchers in the field of nonlinear
>science do not invoke mysticism, but they must observe first, be
>amazed, and *then* explain. This particular and mandatory process of
>investigating nonlinear behavior is the setting for the term
>"emergence".  Pure mathematics is not untouched by this either: 
> ...

It is possible that my idea of the definition of emergence is off, because 
I base it only on things I have seen people say here in recent postings. 
If so what is your precise definition of emergence, and emergent 
properties? It seems to me that you are implying exactly what I said, 
that calling something an emergent property is only saying that you don't 
know how it works, or are unable to adequately describe it. Is emergence 
simply a standard term used to describe something not yet understood, or 
is it something more? Just because you cannot understand something doesn't 
mean it is mystic. It only means you are not omniscient. It doesn't make me 
very uncomfortable that there are things which we cannot explain, although 
I would like to have an explanation for everything. As much as we may 
dislike the fact, the human brain is finite, and has a limited capacity for 
understanding, and processing information, even when assisted by a computer. 
Many of the things that we now think of as emergent will later be explained, 
and thus become non-emergent, while others will never be explained, and will 
remain emergent. I don't think this should necessarily have any effect on 
the general advancement of science, and I don't see where anything you have 
said contradicts my statement. 

There seem to be two main cases where emergence is invoked. One is where 
we start with an observation of something physical or logical, where we 
don't have enough information, or the mental capacity to completely 
understand it, and the other is where we start with a fuzzy definition 
of a general property such as intelligence, consciousness, or memory, and 
try to analyze it from specific examples. The first case is only a matter 
of gaining more knowledge or insight about a particular phenomenon or 
structure, but the second is a problem of trying to determine specific 
information about something which is poorly defined in the first place.

>... This does not mean that the
>universe and mathematics are lawless, it means that the laws of a
>different kind apply; statistical laws."

The universe has no laws, it only exhibits behaviors. Scientific laws 
are strictly man made constructions designed to approximate and predict
those behaviors.


-- 
--------------------------------------------------------------------------
  If you want a picture of the future,  | Internet:     csmith@cscs.UUCP 
  imagine a boot stomping on a human    | UUCP:     ... uunet!cscs!csmith 
  face - forever.  - George Orwell      |---------------------------------

G.Joly@cs.ucl.ac.uk (11/06/90)

In <1990Oct31.102704.18335@cscs.UUCP>, csmith@cscs.UUCP (Craig E. Smith) writes

> In the general sense of the word, a "science" is simply any subject
> which can be systematically studied in a logical manner, and the
> related body of knowledge generated by that study. In most cases, the
> decision of whether or not to call a particular study a science is
> primarily a political consideration, and is largely based on whether
> one accepts that the field can be systematically, and logically
> investigated.

In "The Construction of Reality", Arbib and Hesse argue that Science
is a body of facts. They also mention Popper (and others) and his idea
of a "falsifiable" theory being Science. However, they also make it
clear that there are a large number of theories which "fit the bill",
that is to say are not at variance with previous theories, yet could
be superseded in a subsequent paradigm. They then go on to discuss
Marx (and others) who they seem to like, because I suppose they are
better "scientists" (don't quote me on that).

But what of Science? "Amateur psychology is folk psychology!" The
notion that Science goes along without being "political" is false. The
low funding of basic research in the UK (as compared with France and
Germany) is a political position. But more than that is the
observation that all the questions we ask (as for example
psychologists, social scientists, AI researchers) are influenced by
the fact that we are people ourselves (both observers and subjects).
As an example, a recent piece of research showed a positive
correlation between myopia and high IQ. So the folklore image of a
short-sighted boffin was reinforced by "Science".  This rubberstamping
did not address the questions of what IQ is, whether IQ is inherited
through the genes (and thereby linked to a predisposition to both
eyesight and intelligence), and which was the cause and which the effect.

Gordon Joly                                           +44 71 387 7050 ext 3716
InterNet: G.Joly@cs.ucl.ac.uk       UUCP: ...!{uunet.uu.net,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT, UK

Ref:   The construction of reality / Michael A. Arbib and Mary B. Hesse.
       Cambridge : Cambridge University Press, 1986. - xii,286p. - (Gifford
       lectures ; 1983) Includes index. - 0-521-32689-3

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (11/06/90)

In article <1990Nov1.205907.7472@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:

>>>The trouble with this dismissal of the "emergence" of gravity is that 
>>>the dismissal is predicated upon a theory that has never been demonstrated.
>I don't think it matters that much whether we are dealing with one particle
>or two.  It could be scads of particles.  No one has ever observed gravity
>acting on anything less than jillions of particles.  We all believe it acts
>on each particle but that is unfounded.  I don't think there is even any
>indirect evidence of gravity until you get to macroscopic sized objects.
>The gravitational force is many orders of magnitude weaker than even the
>weakest of the other forces.  I don't often find myself disagreeing with
>Minsky, but this time I think he's wrong.  

Wrong about what?  You're right, so far as I know, that gravity has
not been directly measured for pairs of small particles.  But we were
speaking about "emergence".  Because all known experiments confirm
superposition for gravity, we can "explain" all such experiments in
terms of pair-interactions.  So there is no need to postulate that new
phenomena emerge in large ensembles.
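Minsky's superposition point can be sketched numerically: the net
gravitational force on a body in an N-body ensemble is just the vector sum
of pairwise Newtonian forces, so nothing new need be postulated at the
ensemble level.  A minimal Python sketch (the masses and positions are
arbitrary illustration, not from any real experiment):

```python
# Superposition sketch: the net gravitational force on particle i in an
# N-body system is just the vector sum of its pairwise interactions.
G = 6.674e-11  # Newton's constant (SI units)

def pair_force(m1, p1, m2, p2):
    """Newtonian force on body 1 due to body 2, as a 3-vector."""
    dx = [b - a for a, b in zip(p1, p2)]
    r2 = sum(d * d for d in dx)
    r = r2 ** 0.5
    mag = G * m1 * m2 / r2
    return [mag * d / r for d in dx]

def net_force(i, masses, positions):
    """Force on body i: a plain sum over pair interactions -- superposition."""
    total = [0.0, 0.0, 0.0]
    for j in range(len(masses)):
        if j != i:
            f = pair_force(masses[i], positions[i], masses[j], positions[j])
            total = [t + c for t, c in zip(total, f)]
    return total
```

Because the total is literally a sum of pair terms, an ensemble of any size
is "explained" by the two-body law; no ensemble-level phenomenon is needed.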

Perhaps this discussion is confused because some discussants, like me,
have used the word "emergent" for phenomena that (1) emerge from large
ensembles but (2) cannot (or, rather, have not) been explained in
terms of rules for combining the effects observed in small ensembles.
This is different from explaining things in general, such as the
electric force between two charged particles.  This cannot (or,
rather, has not, except, perhaps, by Dirac) been explained in terms of
phenomena observed in single particles.  But for this sort of thing,
we use the term "interaction" rather than "emergent".  It isn't a matter
of "right" or "wrong", but of how we agree to use those words.

I think Gene Roddenberry once quoted Leonard Nimoy as saying, "I
have my faults, but being wrong is not among them".  

holliday@csgrad.cs.vt.edu (Glenn Holliday) (11/06/90)

	Mike Oliphant <oliphant.4676@telepro.UUCP> said something I
like in another thread:

>I want to know why I have such a "point of view"
>and where it comes from.  Telling me that it is subjective and I cannot
>objectively investigate it doesn't help any.  This is the traditional cop-out

	The quest to understand emergent phenomena seems to be an approach
to a question that may be undecidable:  is consciousness/intelligence
computable?  Building software models of the way we think creates a
computational simulation of something that might or might not actually work
by a computing (or computing-ish?) mechanism.
	Clearly, this works some of the time.  There are subsystems of the
brain that do work that is very much like computation.  We can build
computations that produce interesting behaviors which appear pretty similar
to some behaviors of intelligent, conscious beings.  But ultimately, the
models we build are approximations of the intelligence we study.
	Please note -- I'm not arguing either that "It's spiritual so it's
impossible" or "It's mechanistic so it's just a matter of time."  I'm
arguing that we really don't have enough information yet to know whether
the intelligent behaviors we want to emerge from AI can be modelled
computationally.  A couple of empirical observations on both sides of the
question.  People with specific expertise in these areas, I'd love to hear
what you have to say.

1. Natural Language:  I thought at one time that since natural language
   cannot be modelled by any grammar of any complexity, it was a good
   argument that some of our thought processes are non-computational.  I
   now believe that the ways we fill in the corners, and look up exceptions
   in our language memory can also be computationally modelled.  This
   example argues that, as we have larger communities of specialist experts
   processing cooperatively, we can expect interesting intelligent
   behaviors to emerge.

2. Emotional life:  Our firmware, hormoneware and thought processes are at
   their most entangled when emotional experiences emerge.  I have great
   difficulty imagining how computational processes are going to give rise
   to the actual experience of emotion.

Glenn Holliday  holliday@csgrad.cs.vt.edu OR ghollid@access.nswc.navy.mil

csmith@cscs.UUCP (Craig E. Smith) (11/06/90)

In <692@creatures.cs.vt.edu> holliday@csgrad.cs.vt.edu (Glenn Holliday) writes:


>	Clearly, this works some of the time.  There are subsystems of the
>brain that do work that is very much like computation.  We can build
>computations that produce interesting behaviors which appear pretty similar
>to some behaviors of intelligent, conscious beings.  But ultimately, the
>models we build are approximations of the intelligence we study.
>	Please note -- I'm not arguing either that "It's spiritual so it's
>impossible" or "It's mechanistic so it's just a matter of time."  I'm
>arguing that we really don't have enough information yet to know whether
>the intelligent behaviors we want to emerge from AI can be modelled
>computationally.  A couple of empirical observations on both sides of the
>question.  People with specific expertise in these areas, I'd love to hear
>what you have to say.

I take the mechanistic view, but I am not sure it is only a matter of
time, because I am not sure just how complex the human brain actually
is, and what the limits of understanding are within the brain. There
is some possibility that the human brain is too complex to be fully
understood, and so could never accurately be modeled. 

By my definition, everything in the brain is computational, and the 
brain is a computer in the broad sense of the word, since its primary 
function is to compute, which is to store and process information. 

>2. Emotional life:  Our firmware, hormoneware and thought processes are at
>   their most entangled when emotional experiences emerge.  I have great
>   difficulty imagining how computational processes are going to give rise
>   to the actual experience of emotion.

I have no evidence, but I suspect that when we find out more about how
the brain works, we will find that the emotional systems are actually
fairly simple, especially when compared with areas of the brain that
allow for complicated perceptual information processing, and reasoning
abilities. Emotional responses are basic reactions that evolved in
animals to allow for certain requirements of existence. For instance, 
anger is a state brought about in order to prepare the organism for 
danger, by releasing adrenaline, increasing the heart rate, etc. in
preparation for a fight or a fast retreat. Love is a response designed
to attract mates to each other in order to perpetuate the species. The
effects of emotions on our mental and physical state are extensive, but
I suspect the underlying mechanism is actually relatively simple. I am
not sure we necessarily want to build emotions into machines, at least
not the same ones that people have, since any machine we build will
likely not have precisely the same needs that a person has. I doubt
that we will build robots that need a conventional sex drive, or 
conventional hunger. We might want them to hunger for electricity
to recharge their batteries when they run down, but we probably want
a robot to be a little more stable, and reliable than an emotional
person, at least for practical use. It might be an interesting experiment 
to create these responses in a robot, but without the correct hardware, 
and the need for reaction that goes with the response, the emotion would 
be meaningless.  You might make your desktop PC angry, but all it could 
do would be to curse at you, or just shut down, and this wouldn't really 
be very helpful.


-- 
--------------------------------------------------------------------------
  If you want a picture of the future,  | Internet:     csmith@cscs.UUCP 
  imagine a boot stomping on a human    | UUCP:     ... uunet!cscs!csmith 
  face - forever.  - George Orwell      |---------------------------------

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (11/07/90)

holliday@csgrad.cs.vt.edu (Glenn Holliday) writes:


>2. Emotional life:  Our firmware, hormoneware and thought processes are at
>   their most entangled when emotional experiences emerge.  I have great
>   difficulty imagining how computational processes are going to give rise
>   to the actual experience of emotion.

   I don't think it's as horribly complex as you say.  It is extremely
*specific* in its underlying structure, but that may come with time.  I think
the main reason that we are so easily confused by our emotions is a lack
of direct connection to our "modeling centers".  We produce inner models
of everything we work with, even ourselves, and so produce an artificial
model of our own emotional states.  This creates a feedback loop (since
we can surely modify our own emotions at times) that tries to push them
toward our *expectations*, and you can guess what happens then...
*chaos*, heh heh.  (seriously, though, I *do* think it causes confusion)

   Erich

     /    Erich Stefan Boleyn     Internet E-mail: <erich@cs.pdx.edu>    \
>--={   Portland State University      Honorary Graduate Student (Math)   }=--<
     \   College of Liberal Arts & Sciences      *Mad Genius wanna-be*   /
           "I haven't lost my mind; I know exactly where I left it."

fostel@eos.ncsu.edu (Gary Fostel) (11/08/90)

Malcom said (in reference to my lack of knowledge of Lakatos):

   Imre Lakatos, and since his collected works (he died young) were
   sufficiently popular to appear in a paperback edition (1980), I'm
   surprised to find anyone professing familiarity with philosophy of
   science who is not aware of his work. 

Well, I will continue to profess familiarity with the philosophy of
science ... as to particulars, it would seem that Lakatos did not write
what he wrote until after I had studied philosophy, so perhaps that is
why I was not exposed to him.  Given the tenor of the thinking of the
people I studied philosophy under, I suspect they still do not pay
a lot of heed to his writings.  I recently went to a lecture by Chomsky
(one of those guys who failed to teach me about Lakatos :-) and he does
not seem to have adopted a new view of "science".  AI is not one, if
you listen to him, or to be precise, it is not a "natural science".

I think what happened after WW II is that "science" was redefined 
and Lakatos (and many others, for sure) therefore stepped in to provide
a new philosophy of what-they-chose-to-call-science.  Malcom also added:
 
   Here at Edinburgh (and at Sussex too),
   we think it sufficiently important that we even run courses in it for
   our AI students! (Yup, we teach them about Lakatos).

Like I said before ... it shows. (:-)

-   -   -   -   -   -   -   -   

Some time back Malcolm asked me if

   AI is _properly_ a part of Computer Science [he continued] then
   I'd be interested to hear why.  For my part I think considering AI to be
   a branch of CS is as silly as considering Astronomy to be a branch of
   Optics.

To which I answered that as posed the question seemed:

   to be a variation on the "When did I stop beating my wife"
   classic.  The relative "size" and "scope" of astronomy and optics as the 
   question is posed is absurd and arguably insulting to astronomers and by
   analogy, to Computer "Scientists".  (Also not a science.)

It seemed the right answer given the way the question was asked.  Not
surprisingly, he was disappointed with my answer and retorted:

   Well, I already knew that's how you felt about it, but I don't think the
   relative size and scope of the disciplines has much to do with the point
   of principle involved. You haven't answered my question, you have merely
   re-iterated your opinion.

I'm likely to once again "merely" give my opinion.  I have a somewhat higher
opinion of my opinion tho'.  The relative size and scope are indeed a red
herring, but it was not I who raised it -- it was implicit in the original
loaded question.  My answer to the valid question lurking here, posed
in a non-pejorative manner as "What is the relationship between AI and
Computer Science?", is that there is clearly a non-zero intersection, there
is clearly material in each that does not belong in the other, and anything
more than that requires me to try to define two things that are not at all
well defined.  At risk of "merely giving my opinion", I think CS is the more
fundamental and AI the more applied, but much of AI is not applied CS but
application of other fields, such as logic, linguistics, and perception, that
are clearly not part of the foundation of CS.  Depending on one's definition
of CS and AI, this may seem reasonable or ludicrous.  I am using my own defs.
(And no, I am not going to try to provide them on this newsgroup.)

I'll accept that I lost the bet on Lakatos ... I was guessing he was a 1950's
or early 60's guy at latest, and it seems I missed by a decade.  I'll buy! (But
I hope the selected variety is not too much like Guinness ...) Wish we
could meet and conduct this argument with the needed lubrication to hold
down the friction!


----GaryFostel----                           Department of Computer Science
                                             North Carolina State University

aboulang@bbn.com (Albert Boulanger) (11/11/90)

In article <1990Nov5.021135.3749@cscs.UUCP> csmith@cscs.UUCP (Craig E. Smith) writes:

   It is possible that my idea of the definition of emergence is off, because 
   I base it only on things I have seen people say here in recent postings. 
   If so what is your precise definition of emergence, and emergent 
   properties? It seems to me that you are implying exactly what I said, 
   that calling something an emergent property is only saying that you don't 
   know how it works, or are unable to adequately describe it. Is emergence 
   simply a standard term used to describe something not yet understood, or 
   is it something more? Just because you cannot understand something doesn't 
   mean it is mystic. It only means you are not omniscient. It doesn't make me 
   very uncomfortable that there are things which we cannot explain, although 
   I would like to have an explanation for everything. As much as we may 
   dislike the fact, the human brain is finite, and has a limited capacity for 
   understanding, and processing information, even when assisted by a computer. 
   Many of the things that we now think of as emergent will later be explained, 
   and thus become non-emergent, while others will never be explained, and will 
   remain emergent. I don't think this should necessarily have any effect on 
   the general advancement of science, and I don't see where anything you have 
   said contradicts my statement. 


To recap what I have been presenting in prior messages, I have claimed
that the notion of emergence is best understood in the context of
nonlinear dynamical systems. Emergence is fundamentally due to the
fact that nonlinear systems do not obey the principle of superposition
that is at the core of the analysis of linear systems. This is the
basis of understanding emergence. In this note, I will mention some
other properties of nonlinear dynamical systems that will sharpen our
view of what emergence is and hopefully show that it is not just "a
standard term used to describe something not yet understood" - yet in
a way it is just that. (The ole' objectivist debate ;-)) This is
fundamentally due to a kind of uncertainty principle arising from the
sensitive dependence on initial conditions in chaotic dynamical
systems. In fact, it may explain a long-standing problem in physics -
the "emergence" of an arrow of time from a microscopic universe of
reversible laws. Since we are also looking at many-body systems, we
will also have to throw in some thermodynamics.

One way to measure the emergence of structure is to take an
information-theoretic approach and look at the "entropy" of a system.
(One has to be careful in the formulation of entropy under these
conditions.) If the measure of information increases, then we have new
structure being formed. First of all, this implies that the many-body
system under investigation is not in thermodynamic equilibrium --
"self-organization under far-from-equilibrium conditions". The tools
for analyzing such nonequilibrium systems are also relatively recent
and go hand in hand with results from nonlinear dynamics -- the
joint area is known as ergodic theory. The picture one should have,
then, is a many-body system with nonlinear interactions in a heat-bath
(a source of energy and some place to discharge heat). One reference on
nonequilibrium thermodynamics is:

"From Being to Becoming: Time and Complexity in the Physical Sciences"
Ilya Prigogine
W.H. Freeman & Co., 1980
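The entropy bookkeeping above can be made concrete with a toy sketch:
coarse-grain the state space into cells and compute the Shannon entropy of
the occupation probabilities.  A structured state concentrates probability
in a few cells, so its entropy falls below the equilibrium maximum.  (The
8-cell coarse-graining and the sample distributions below are my
illustrative assumptions, not from the post.)

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Coarse-grain the state space into 8 cells (an illustrative choice).
# An equilibrium-like state spreads probability uniformly over the cells;
# a "structured" state concentrates it, so its entropy is lower and the
# information content (max entropy minus entropy) is higher.
uniform    = [1.0 / 8] * 8                            # entropy = 3 bits
structured = [0.7, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0]  # entropy < 3 bits
```

The gap between the uniform (maximum) entropy and the actual entropy is one
crude measure of how much structure has formed.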

One necessary but not sufficient sign of chaos in a nonlinear
dynamical system is the existence of sensitive dependence on initial
conditions. What happens is that nearby initial trajectories undergo
an exponential expansion: any uncertainty in initial conditions is
amplified exponentially. A related diagnostic is the rate of expansion
of the axes of the hyperellipsoid formed from a small initial point set.
If one of the so-called Lyapunov exponents (the log of this expansion
rate -- one also sees the spelling Lyapounov) is > 0, then you have a
necessary condition for chaos.
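The positive-exponent criterion can be illustrated on the simplest chaotic
system, the logistic map x -> r*x*(1-x) (my stand-in example; the post is
about many-body systems).  Averaging log|f'(x)| along an orbit estimates
the largest Lyapunov exponent; at r = 4 it comes out near ln 2 > 0, while
at a parameter with a stable fixed point it is negative:

```python
import math

def lyapunov_logistic(r, x0=0.1, n=10000, discard=100):
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x)
    as the orbit average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(discard):        # let transients die out
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

# lyapunov_logistic(4.0) is close to ln 2 (positive: chaotic);
# lyapunov_logistic(2.5) is negative (orbit settles onto a fixed point).
```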

Consider what happens when you put such a chaotic nonlinear many-body
system in a heat-bath. What can happen is that the system will pick up
small perturbations from the heat bath at the microscopic level and
amplify them to the level of "emergence" of structure at the
macroscopic level. We cannot, in *principle*, know what these
perturbations are! This approach has been written up in several papers:


"Chaos, Entropy, and the Arrow of Time"
Peter Coveney
New Scientist, 29 September 1990, 49-52
This one has minimal math.

"The Second Law of Thermodynamics: Entropy, Irreversibility, and Dynamics"
Peter Coveney
Nature, Vol 333, 2 June 1988, 409-415

"Strange Attractors, Chaotic Behavior, and Information Flow"
Robert Shaw
Z. Naturforsch, 36a, 1980, 80-112
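A toy demonstration of the amplification mechanism described above: two
logistic-map orbits (again my stand-in for a chaotic system) started a
"microscopic" 1e-12 apart become macroscopically different within a few
dozen iterations:

```python
def logistic_orbit(x0, n, r=4.0):
    """Return the first n+1 points of the orbit of x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Identical dynamics, initial conditions differing by a microscopic 1e-12.
a = logistic_orbit(0.3, 60)
b = logistic_orbit(0.3 + 1e-12, 60)
gap = [abs(x - y) for x, y in zip(a, b)]
# The gap grows roughly like exp(lambda * t) until it saturates at the
# size of the whole interval -- the perturbation has become macroscopic.
```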

This, in my view, is the mechanism for the evolution of life from
a non-life protoform. I also think it is the fundamental reason why
"open" computational systems can be so powerful -- people in the open
systems game need to take a dynamical systems view of what they are
doing.

I normally botch up my presentations by being too brief so if anybody
needs more unpacking, give a holler.


Regards,
Albert Boulanger
aboulanger@bbn.com