[comp.ai.digest] metaepistemology and unknowability

YLIKOSKI@FINFUN.BITNET (07/24/88)

Date: Thu, 21 Jul 88 15:19 EDT
From: YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU
Subject:  metaepistemology and unknowability
To: AILIST@AI.AI.MIT.EDU
X-Original-To:  @AILIST, YLIKOSKI

Distribution-File:
        AILIST@AI.AI.MIT.EDU

In AIList Digest V8 #9, David Chess <CHESS@ibm.com> writes:

>Can anyone complete the sentence "The actual world is unknowable to
>us, because we have only descriptions/representations of it, and not..."?

I may have misused the word "unknowable".  I'm applying a mechanistic
model of human thinking: thinking is an electrochemical process, with
neuron activation patterns representing the objects one thinks of.  The
heart of the matter is whether you can say a person or a robot *knows*
something if all it has is a representation, which may be right or
wrong, and it has no way to get absolute knowledge.  Well, the
philosophy of science has a lot to say about describing reality with a
theory or a model.

Note that there are two kinds of models here.  The human brain uses
electrochemical, intracranial models without our being aware of it; the
philosophy of science deals with written theories and models, which are
easy to examine, manipulate, and communicate.

I would say that the actual world is unknowable to us because we have
only descriptions of it, and not any kind of absolutely correct,
totally reliable information about it.

>(I would tend to claim that "knowing" is just (roughly) "having
> the right kind of descriptions/representations of", and that
> there's no genuine "unknowability" here; but that's another
> debate...)

The unknowability here is uncertainty about the actual state of the
world, in much the same sense that scientific theories are theories,
not pure, absolute truths.


Andy Ylikoski

steve@comp.vuw.ac.nz (Steve Cassidy) (07/26/88)

To: uunet!comp-ai-digest@uunet.UU.NET
Path: vuwcomp!steve
From: Steve Cassidy <steve@comp.vuw.ac.nz>
Newsgroups: comp.ai.digest
Subject: Re: metaepistemology and unknowability
Summary: What do these definitions tell us?
Date: Mon, 25 Jul 88 00:26 EDT
References: <19880724060148.1.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
Reply-To: Steve Cassidy <steve@comp.vuw.ac.nz>
Organization: Comp Sci, Victoria Univ, Wellington, New Zealand
Lines: 49




In a previous article YLIKOSKI@FINFUN.BITNET (Andy Ylikoski) writes:

>I would say that the actual world is unknowable to us because we have
>only descriptions of it, and not any kind of absolutely correct,
>totally reliable information about it.

This seems like a totally useless definition of knowing: what have you
gained by saying that I do not *know* about chairs because I only have
representations of them?  This seems to be a recurring problem with
definitions of concepts in cognition.

Dan Dennett tries, in Brainstorms, to provide a useful definition of
something like what we mean by "intelligence".  To avoid the problems
of emotional attachment to words he uses the less emotive
"intentionality".  He develops a definition that could be useful in
deciding how to make systems act like intelligent actors, by
restricting that definition to accurate concepts.  (As yet I don't
claim to understand what he means, but I think I get his drift.)

Now, we can argue whether Dennett's 'intentionality' corresponds to
'intelligence' if we like, but what will it gain us?  It depends on what
your goals as an AI researcher are.  I'm interested in building models
of cognitive processes - in particular, reading.  My premise in doing
this is that cognitive processes can be modelled computationally, and
that by building computational models we can learn more about the real
processes.  I am not interested in whether, at the end of the day, I
have an intelligent system, a simulation of an intelligent system, or
just a dumb computer program.  I will judge my performance on results:
does it behave in a similar way to humans?  If so, my model, and the
theory it is based upon, is good.

Is there anyone out there whose work will be judged good or bad
depending on whether it can be ascribed `intelligence'?  It seems to me
that it is only useful to make definitions to some end, rather than for
the sake of making definitions; we are, after all, Applied
Epistemologists and not Philosophers (:-)


Steve Cassidy				    domain: steve@comp.vuw.ac.nz|
Victoria University, PO Box 600,   -------------------------------------|
Wellington, New Zealand	             path: ...seismo!uunet!vuwcomp!steve|

"If God had meant us to be perfect, He would have made us that way"
					     - Winston Niles Roomford III