[comp.ai.digest] questions and answers about meta-epistemology

YLIKOSKI@FINFUN.BITNET (06/29/88)

Return-path: <@AI.AI.MIT.EDU,@MITVMA.MIT.EDU:YLIKOSKI@FINFUN.BITNET>
Received: from AI.AI.MIT.EDU by ZERMATT.LCS.MIT.EDU via CHAOS with SMTP id 167896; 27 Jun 88 13:58:44 EDT
Received: from MITVMA.MIT.EDU (TCP 2227000003) by AI.AI.MIT.EDU 27 Jun 88 14:00:00 EDT
Received: from MITVMA.MIT.EDU by MITVMA.MIT.EDU (IBM VM SMTP R1.1) with BSMTP id 3435; Mon, 27 Jun 88 13:58:10 EDT
Received: from FINFUN.BITNET (YLIKOSKI) by MITVMA.MIT.EDU (Mailer X1.25) with
 BSMTP id 3434; Mon, 27 Jun 88 13:57:37 EDT
Date:     Mon, 27 Jun 88 20:57 O
From:     <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject:  questions and answers about meta-epistemology
To:       AILIST@AI.AI.MIT.EDU
X-Original-To:  @AILIST, YLIKOSKI

Distribution-File:
        AILIST@AI.AI.MIT.EDU

Here are questions from csrobe@icase.arpa (Charles S. Roberson) and my
answers to them:

>Assume the "basic structure of the world is unknowable"
>[JMC@SAIL.Stanford.edu] and that we can only PERCEIVE our
>world, NOT KNOW that what we perceive is ACTUALLY how the
>world is.
>
>Now imagine that I have created an agent that interacts
>with *our* world and which builds models of the world
>as it PERCEIVES it (via sensors, nerves, or whatever).
>
>My question is this:  Where does this agent stand, in
>relation to me, in its perception of reality?  Does it
>share the same level of perception that I 'enjoy' or is
>it 'doomed' to be one level removed from my world (i.e.
>is its perception inextricably linked to my perception
>of the world, since I built it)?

It has the perceptual and inferencing capabilities you designed and
implemented, unless you gave it some kind of self-rebuilding or
self-improving capability.  Thus its perception is linked to your
world.
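The point can be made concrete with a small sketch (all names here are
hypothetical, not anyone's actual system): an agent whose world model is
built only from the sensors its designer chose to give it can never
register an aspect of the world no sensor reports.

```python
# Sketch: an agent whose view of the world is limited to the sensors
# its designer supplied.  Hypothetical example, not a real system.

world = {"temperature": 22.0, "light": 0.8, "magnetic_field": 0.3}

class Agent:
    def __init__(self, sensors):
        # The designer fixes the sensor set at construction time.
        self.sensors = sensors
        self.model = {}

    def perceive(self, world):
        # The model can only ever contain what the sensors report.
        for name, read in self.sensors.items():
            self.model[name] = read(world)

agent = Agent({
    "temperature": lambda w: w["temperature"],
    "light": lambda w: w["light"],
    # No sensor for "magnetic_field": that aspect of the world is
    # invisible to the agent, however real it is.
})

agent.perceive(world)
print(sorted(agent.model))              # ['light', 'temperature']
print("magnetic_field" in agent.model)  # False
```

However rich the world is, the agent's model is bounded by the
designer's choice of sensors, which is the sense in which its
perception is "nested" in the designer's.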

>Does this imply that "true intelligence" is possible
>if and only if an agent's perception is not nested
>in the perception of its creator?  I don't think so.

I don't think so either.  That the robot's perception is linked to its
designer's seems to me an inessential limitation.

>I believe we all accept perception as a vital part of an
>intelligent entity.  (Please correct me if I am wrong.)

Perception is essential.  All observation of reality takes place
through perception.

>However, a flawed perception does not make the entity any
>less intelligent (does it?).  What does this say about
>the role of perception to intelligence?  It has to be
>there but it doesn't have to function free of original
>bias?

A flawed perception can be lethal, to an animal for example.
Perception itself is a necessary requirement.

It can be argued, though, that all human perception is biased (our
education influences how we interpret what we perceive).

>Perhaps, we have just created an agent that perceives
>freely but it can only perceive a sub-world that I
>defined based on my perceptions.  Could it ever be
>possible to create an agent that perceives freely and
>that does not live in a sub-world?

Yes, at least if the agent has the capability to extend itself, for
example by redesigning and rebuilding itself.  How much computational
power, in the Turing machine sense, this capability requires is an
interesting theoretical question which may already have been studied
by the theoreticians out there.
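The self-extension idea can be sketched as follows (again with
hypothetical names): an agent that can install new sensors at run time
is not confined to the perceptual sub-world its designer fixed at
construction.

```python
# Sketch of a self-extending agent: it can add sensors to itself at
# run time and so escape its original perceptual limits.
# Hypothetical example, not a real system.

class SelfExtendingAgent:
    def __init__(self, sensors):
        self.sensors = dict(sensors)
        self.model = {}

    def perceive(self, world):
        for name, read in self.sensors.items():
            self.model[name] = read(world)

    def extend(self, name, read):
        # The agent rebuilds part of itself: its own sensor suite.
        self.sensors[name] = read

world = {"light": 0.8, "magnetic_field": 0.3}
agent = SelfExtendingAgent({"light": lambda w: w["light"]})

agent.perceive(world)
before = set(agent.model)   # only 'light' is perceptible

# Self-improvement step: the agent acquires a sense it was not
# designed with, and the formerly invisible feature becomes visible.
agent.extend("magnetic_field", lambda w: w["magnetic_field"])
agent.perceive(world)
after = set(agent.model)    # now includes 'magnetic_field'
```

The open question in the text is how much computational power such an
`extend` step requires in general; the sketch only shows that the
designer's original sensor set need not be a permanent bound.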

                        Andy Ylikoski