[net.ai] A topic for discussion, phil/ai pers

robert@hpfclq.UUCP (05/09/84)

I don't see much difference between perception over time and perception
at all.  Example: given a program that understands what a chair is, you give
the program a chair it has never seen before.  It can answer yes or no
as to whether the object is a chair.  It might be wrong.  Now we give a
program designed to recognize people examples of Abraham Lincoln
at different ages (that is, over time).  We present a picture of Abraham
Lincoln that the program has never seen before and ask: is this
Abe?  The program might again answer incorrectly, but from a global
aspect the problem is the same.  Objects over time are just classes
of objects.  Not that the problem isn't difficult, as you have said;
I just think it is all the same difficult problem.
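The point can be sketched in a few lines: a recognizer that tests class
membership handles "is this a chair?" and "is this Abe at some age?" with the
same machinery.  Everything below (the Recognizer class, the feature sets, the
0.6 threshold) is invented purely for illustration, not taken from the posts:

```python
def matches(example, candidate, threshold=0.6):
    """True if the candidate shares enough of the example's features."""
    return len(example & candidate) / len(example) >= threshold

class Recognizer:
    """Learns a class from examples; answers yes/no, and may be wrong."""
    def __init__(self, examples):
        self.examples = [frozenset(e) for e in examples]

    def is_member(self, candidate):
        candidate = frozenset(candidate)
        return any(matches(e, candidate) for e in self.examples)

# "Chair" as a class of objects:
chairs = Recognizer([{"legs", "seat", "back"}, {"legs", "seat"}])

# "Abe Lincoln" as a class of time-slices -- formally the same thing:
abe = Recognizer([{"tall", "beard", "stovepipe-hat"},
                  {"tall", "clean-shaven", "young"}])

print(chairs.is_member({"legs", "seat", "cushion"}))  # True
print(abe.is_member({"tall", "beard"}))               # True
```

Both recognizers can misfire on a novel instance, which is exactly the point:
temporal identification is class membership with time-slices as the members.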

I hope I understood your problem.  Trying hard,
					Robert (animal) Heckendorn
					..!hplabs!hpfcla!robert

marcel@uiucdcs.UUCP (05/16/84)

#R:wxlvax:-27700:uiucdcs:32300026:000:2437
uiucdcs!marcel    May 16 12:57:00 1984

The problem is one of identification. When we see one object matching a
description of another object we know about, we often assume that the object
we're seeing IS the object we know about -- especially when we expect the
description to be definite [1]. This is known as Leibniz's law of the
identity of indiscernibles. That law found its way into the definitions
of set theory [2]: two entities are "equal" iff every property of one is also
a property of the other. Wittgenstein [3] objected that this did not allow for
replication, i.e. the fact that we can distinguish two indistinguishable objects
when they are placed next to each other (identity "solo numero"). So, if we
don't like to make assumptions, either no two objects are ever the same object,
or else we have to follow Aristotle and say that every object has some property
setting it apart from all others. That's known as Essentialism, and is hotly
disputed [4]. The choices until now have been: breakdown of identification,
essentialism, or assumption. The latter is the most functional, but not nice
if you're after epistemic certainty.
	Still, I see no insurmountable problems with making computers do the
same as ourselves: assume identity until given evidence to the contrary. That
we can't convince ourselves of that method's epistemic soundness does nothing
to its effectiveness. All one needs is a formal logic or set theory (open
sentences, such as predicates, are descriptions) with a definite description
operator [2,5]. Of course, that makes the logic non-monotonic, since a definite
description becomes meaningless when two objects match it. In other words, a
closed-world assumption is also involved, and the theory must go beyond first-
order logic. That's a technical problem, not necessarily an unsolvable one [6].
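As a rough sketch of that strategy -- assume identity while a definite
description picks out exactly one known individual, and retract the
identification as soon as a second match appears -- here is a toy in modern
terms.  The World class and its methods are inventions for illustration, not
any of the cited formalisms:

```python
class World:
    """A toy knowledge base of observed individuals."""
    def __init__(self):
        self.individuals = []          # each individual is a dict of properties

    def observe(self, properties):
        self.individuals.append(dict(properties))

    def the(self, description):
        """Definite description operator: return the unique individual whose
        properties include `description`, or None when 0 or 2+ match
        (the description then fails to denote)."""
        matching = [ind for ind in self.individuals
                    if description.items() <= ind.items()]
        return matching[0] if len(matching) == 1 else None

w = World()
w.observe({"name": "Abe", "beard": True})
assert w.the({"beard": True}) is not None   # unique match: assume identity

w.observe({"name": "Karl", "beard": True})  # new evidence arrives...
assert w.the({"beard": True}) is None       # ...and retracts the identification
```

The second `observe` is the non-monotonic step: a conclusion ("the bearded one
is Abe") that was warranted earlier is withdrawn, which is why a closed-world
assumption and more than first-order machinery are involved.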


[1] see the chapter on SCHOLAR in Bobrow's "Representation and Understanding";
    note the "uniqueness assumption".
[2] Introduced by Whitehead & Russell in their "Principia Mathematica".
[3] Wittgenstein's "Tractatus".
[4] WVO Quine, "From a logical point of view".
[5] WVO Quine, "Mathematical Logic".
[6] Doyle's Truth Maintenance System (Artif. Intel. 12) attacks the non-
    monotonicity problem fairly well, though without a sound theoretical
    basis. See also McDermott's attempt at formalization (Artif. Intel. 13
    and JACM 29 (Jan '82)).

					Marcel Schoppers
					U of Illinois at Urbana-Champaign
					uiucdcs!marcel

dinitz@uicsl.UUCP (05/17/84)

#R:wxlvax:-27700:uicsl:15500037:000:1710
uicsl!dinitz    May 17 13:53:00 1984

When I read the following paragraph, I groaned "Oh no, not this topic
again!"

   It seems that it is IMPOSSIBLE to ever build a computer that can truly
   perceive as a human being does, unless we radically change our ideas 
   about how perception is carried out.

It's useless to argue whether computers will ever perceive AS HUMANS
DO.  But I read further and found more substance worth chewing on in
the next paragraph.

   The reason for this is that we humans have very little difficulty
   identifying objects as the same across time, even when all the features of
   that object change (including temporal and spatial ones).  Computers,
   on the other hand, are being built to identify objects by feature-sets. But
   no set of features is ever enough to assure cross-time identification of
   objects.  

Although the responses so far have addressed the philosophical questions of
sameness and continuity, I understood the problem to be one of perception.
I move that the discussion be carried out on both levels.

The perception-level problems can be summarized as follows.  Regardless of
the metaphysical basis one adopts, the human brain (and its adjunct
perceptual subsystems) processes sensory data to give the impression that
discrete objects exist through time.  Furthermore, the brain tracks these
objects through successive (though not necessarily adjacent) images of the
environment -- clearly in response to an evolutionary imperative.  What are
the mechanisms that allow this tracking to take place?  How are they similar
to or different from the methods we might use on computers to achieve such
an effect?
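One way to make the tracking question concrete is a deliberately naive
mechanism that assumes the best feature match in the next image IS the same
object.  Every name and feature set below is invented for illustration; real
perceptual tracking is far richer:

```python
def track(prev_frame, next_frame):
    """prev_frame: {label: feature_set}; next_frame: list of feature sets
    from the next image.  Re-identify each tracked object by greedy best
    feature overlap, assuming sameness unless no features match at all."""
    assignments = {}
    unused = list(next_frame)
    for label, feats in prev_frame.items():
        if not unused:
            break
        best = max(unused, key=lambda f: len(feats & f))
        if feats & best:               # some evidence of sameness
            assignments[label] = best
            unused.remove(best)
    return assignments

frame1 = {"ball": {"round", "red"}, "box": {"square", "blue"}}
frame2 = [{"square", "blue", "open"}, {"round", "red", "moving"}]
print(track(frame1, frame2))
```

Note that the tracker happily reports "same ball" even though the feature set
has changed -- which is the behavior wanted -- but it will also confidently
misidentify whenever two objects swap appearances between images.  The open
question is what mechanisms let the brain do better.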

The English language may bias our answers to these questions, so be careful.