[comp.ai.digest] metaepistemology

YLIKOSKI@FINFUN.BITNET (06/26/88)

Date: Fri, 24 Jun 88 12:46 EDT
From: YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU
Subject:  metaepistemology
To: AILIST@AI.AI.MIT.EDU

In AIList Digest   V7 #41, John McCarthy <JMC@SAIL.Stanford.EDU>
writes:

>I want to defend the extreme point of view that it is both
>meaningful and possible that the basic structure of the
>world is unknowable.  It is also possible that it is
>knowable.


Suppose an agent that wants to know what is out there.

Let the agent have methods and data like a Zetalisp flavor.

Let it have sensors with which it can observe its environment and
methods with which it can influence its environment, such as servo
motors driving robot hands.
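The agent just described can be sketched in modern Python rather than Zetalisp flavors: bundled data and methods, a sensor for observing the environment, and an effector for acting on it. All names below are illustrative assumptions, not part of the original post.

```python
class Agent:
    def __init__(self):
        # The agent's internal description of the world -- not the
        # world itself, only a representation of it.
        self.world_model = {}

    def sense(self, environment):
        # Observe the environment through sensors; the agent sees only
        # what its sensors report, never the thing-in-itself.
        return {key: environment[key]
                for key in ("light", "distance") if key in environment}

    def update_model(self, observation):
        # Fold new observations into the internal description.
        self.world_model.update(observation)

    def act(self, command):
        # Influence the environment, e.g. by driving servo motors.
        return f"servo: {command}"


agent = Agent()
# The environment has a "hidden" aspect the sensors cannot report.
agent.update_model(agent.sense({"light": 0.8, "distance": 2.5, "hidden": 42}))
```

Note that `"hidden"` never enters `world_model`: the agent's description is bounded by what its sensors can perceive.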


Now what can it know?


It is obvious that the agent can only have a representation of the
Ding an sich.  In this sense, reality is unknowable.  We only have
descriptions of the actual world.

There can be successively better approximations of the truth.  It is
important to be able to improve the descriptions, to compare them, and
to discard those which do not appear to describe reality.

It also helps if the agent itself knows it has descriptions and that
they are mere descriptions.


It is also important to be able to draw inferences based on the
descriptions, for example to design an experiment to test a new theory
and to compare the predicted outcome with the one which actually takes
place.
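The experiment loop above can be sketched as follows: each "theory" predicts an outcome, the prediction is compared with what actually happens, and theories whose predictions fail are discarded. The `reality` function and the candidate theories are illustrative assumptions standing in for the unknown world and the agent's descriptions of it.

```python
def reality(x):
    # The hidden structure of the world, not directly visible to the agent;
    # the agent can only probe it through experiments.
    return 2 * x + 1

# Competing descriptions of the world held by the agent.
theories = {
    "linear":    lambda x: 2 * x + 1,
    "quadratic": lambda x: x * x,
    "constant":  lambda x: 3,
}

def run_experiments(theories, test_inputs):
    # Keep a theory only if its predicted outcome matches the observed
    # outcome on every experiment; discard the rest.
    return {name: predict
            for name, predict in theories.items()
            if all(predict(x) == reality(x) for x in test_inputs)}

surviving = run_experiments(theories, [0, 1, 2, 3])
```

After the experiments only the description that tracks reality survives, which is the sense in which the approximations get successively better.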


It seems that, for the most part, evolution has been responsible for
developing life-forms which have good descriptions of the Ding an sich
and a good capability to do inference with their models.  Humans are
at the top of this evolutionary development: we are capable of
forming, processing and communicating complicated symbolic models of
reality.


                        Andy Ylikoski

csrobe@ICASE.ARPA (Charles S. Roberson) (06/29/88)

Date: Mon, 27 Jun 88 09:18:11 EDT
From: csrobe@icase.arpa (Charles S. Roberson)
Message-Id: <8806271318.AA06857@work18.icase>
To: ailist@ai.ai.mit.edu, ylikoski@finfun.bitnet
Subject: Re: metaepistemology

Assume the "basic structure of the world is unknowable" 
[JMC@SAIL.Stanford.edu] and that we can only PERCEIVE our
world, NOT KNOW that what we perceive is ACTUALLY how the
world is.

Now imagine that I have created an agent that interacts
with *our* world and which builds models of the world
as it PERCEIVES it (via sensors, nerves, or whatever).

My question is this:  Where does this agent stand, in
relation to me, in its perception of reality?  Does it
share the same level of perception that I 'enjoy' or is
it 'doomed' to be one level removed from my world (i.e.
is its perception inextricably linked to my perception
of the world, since I built it)?

Assume now that the agent is so doomed.  It may therefore perceive
things that are inconsistent with the world (though we may never know
it) but consistent with *my* perception of the world.

Does this imply that "true intelligence" is possible
if and only if an agent's perception is not nested
in the perception of its creator?  I don't think so.
If it is true that we cannot know the "basic structure of
the world" then our actions are based solely on our
perceptions and are independent of the reality of the
world.

I believe we all accept perception as a vital part of an
intelligent entity.  (Please correct me if I am wrong.)
However, a flawed perception does not make the entity any
less intelligent (does it?).  What does this say about
the role of perception in intelligence?  It has to be
there, but it doesn't have to function free of original
bias?

Perhaps, we have just created an agent that perceives
freely but it can only perceive a sub-world that I
defined based on my perceptions.  Could it ever be
possible to create an agent that perceives freely and
that does not live in a sub-world?
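The nesting described above can be made concrete: the agent never samples the world directly, but only through a filter its creator built, so whatever the creator's perception omits is invisible to the agent in principle. The world contents and filter below are illustrative assumptions.

```python
def creator_perception(world):
    # The creator perceives only some aspects of the world.
    return {k: v for k, v in world.items() if k in ("color", "shape")}

def agent_perception(world, creator_filter):
    # The agent's sensors were designed from the creator's model of the
    # world, so its view is nested inside the creator's view.
    return creator_filter(world)

world = {"color": "red", "shape": "cube", "spin": "up"}
agent_view = agent_perception(world, creator_perception)
# "spin" exists in the world but can never reach the agent,
# because it never figured in the creator's perception.
```

The agent may still perceive "freely" within `agent_view`; the point is that its sub-world is bounded by `creator_perception`, not by the world itself.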

-chip
+-------------------------------------------------------------------------+
|Charles S. Roberson          ARPANET:  csrobe@icase.arpa                 |
|ICASE, MS 132C               BITNET:   $csrobe@wmmvs.bitnet              |
|NASA Langley Rsch. Ctr.      UUCP:     ...!uunet!pyrdc!gmu90x!wmcs!csrobe|
|Hampton, VA  23665-5225      Phone:    (804) 865-4090                   |
+-------------------------------------------------------------------------+

bill@proxftl.UUCP (T. William Wells) (07/03/88)

To: novavax!uflorida!comp-ai-digest
Path: proxftl!bill
From: T. William Wells <proxftl!bill@bikini.cis.ufl.edu>
Newsgroups: comp.ai.digest
Subject: Re: metaepistemology
Summary: rehashed Kant
Date: Sat, 2 Jul 88 15:11 EDT
References: <19880625192541.0.NICK@INTERLAKEN.LCS.MIT.EDU>
Organization: Proximity Technology, Ft. Lauderdale
Lines: 16


In a previous article, YLIKOSKI@FINFUN.BITNET writes:
> In AIList Digest   V7 #41, John McCarthy <JMC@SAIL.Stanford.EDU>
> writes:
>
> >I want to defend the extreme point of view that it is both
> >meaningful and possible that the basic structure of the
> >world is unknowable.  It is also possible that it is
> >knowable.

I did not see the origins of this debate but it appears to be
nothing more than an attempt to defend the Kantian noumenal vs.
phenomenal distinction. Instead of wasting time debating this
issue, why don't those of you who are interested go and study
some philosophy? And, for those of you who are going to say "but
I have", carefully compare this view with Kant and you will see
that they are in essence identical.