[comp.ai.philosophy] the role of reason

ld231782@longs.LANCE.ColoState.EDU (Lawrence Detweiler) (10/14/90)

loren@tristan.llnl.gov (Loren Petrich) in <69347@lll-winken.LLNL.GOV>:

>I wonder how much of our reasoning
>works by what might best be called "fuzzy logic" -- a logic in which
>predicates can not only have the values "true" or "false", but any
>value in between.

None of it, in my view, in the sense that probability will always be
only an approximation of reality.  Why use a model when you can have the
real thing?  In other words, instead of simulation, let's seek duplication.
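For reference, a minimal sketch of the "fuzzy logic" Petrich describes: predicates take truth values anywhere in [0.0, 1.0] rather than just true/false, combined here with Zadeh's standard min/max/complement connectives. The predicate values are invented for illustration.

```python
def fuzzy_and(a, b):
    # Conjunction as the minimum of the two truth degrees.
    return min(a, b)

def fuzzy_or(a, b):
    # Disjunction as the maximum of the two truth degrees.
    return max(a, b)

def fuzzy_not(a):
    # Negation as the complement in [0, 1].
    return 1.0 - a

# "This person is tall" is partially true; so is "this person is heavy".
tall = 0.7
heavy = 0.4

print(fuzzy_and(tall, heavy))   # conjunction inherits the weaker degree
print(fuzzy_or(tall, heavy))    # disjunction inherits the stronger degree
```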

What do we need to duplicate?  I maintain that thought is an emergent
property of a dynamic, chaotic system.  When you see it in the sea you
call it waves.  Where is the comforting order of "knowledge", "logic",
"reason", and "procedure" (etc. ad infinitum ad nauseam) in the
tumultuous stew of activity ripples in intertwined neurons?  Literally,
in our imagination.  These words are our feeble attempts to characterize
a dynamic system with static ideas.  We need new concepts, descriptions,
and models not for "fuzziness" but for slippery chaos.  Neural networks
are more than simply a "good bet"; researchers tend to agree that the
biological implementations of intelligence are not merely the most
successful to date--they are the ONLY ones!  Hence, some bias
toward any approach that mimics our raison d'etre is not mere vanity!

When I think of thinking, I think nothing of simplicity.  I think of the
brain, and then its 10^10 separate cells, each of which is directly
connected to 10^4 neighbors and indirectly to every other cell in the
system!  I
think how each one functions both independently and dependently at the
speed of flying molecules.  I think it would be rather ungrateful and
disloyal to the biology that carries my consciousness if I were to
characterize it as some ultimately simple system of "memories" and
"procedures" organized the way I find most pleasant and confirmatory of
my own arbitrary preconceptions.  It is as if I were to describe my best
friend as a computer!  Just because someone is my best friend, am I
qualified to categorize him?  Let him describe himself!
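The scale in that paragraph works out as a quick back-of-envelope calculation; the 10^10 and 10^4 figures are the post's own rough estimates, not measurements.

```python
# Rough estimates from the post, not measurements.
neurons = 10**10   # separate cells in the brain
fanout = 10**4     # direct connections per cell

# Each cell contributes ~fanout direct links, so the total count of
# direct connections is the product of the two estimates.
connections = neurons * fanout
print(f"roughly {connections:.0e} direct connections")
```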

smoliar@vaxa.isi.edu (Stephen Smoliar) in <15238@venera.isi.edu>:

>To try to reduce the matter to the brink of
>over-simplification, thinking is what we do with our minds as we interact
>with the world around us.  It is not necessarily rational according to many
> (most?) existing standards of rationality in logic (and perhaps
>epistemology, as well).  Indeed, even if we give up the logical position and
>pursue the course of philosophers who simply wish to account for explanatory
>laws, we are still liable to be frustrated.  The fact is that there are
>plenty of things which we do with our minds which are downright irrational,
>and that is one of the things which makes us human.

I agree with this without all the qualifiers.  The way we interact with
the world around us is not rational according to any existing symbolic
standards of logic.  These are rigid, static approximations of an
inherently dynamic entity.  Our "irrationality"--if this is to be taken
as thoughts, and their accompanying actions, that deviate from precise
predictions--is exactly what makes us human.

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) in
<3593@media-lab.MEDIA.MIT.EDU> on learning & behavior:

>some of the job is done by NNs.  And some of the job is done
>by compactly-describable procedural specifications.  Where is the
>"traditional, symbolic, AI in the brain"?  The answer seems to have
>escaped almost everyone on both sides of this great and spurious
>controversy!  The 'traditional AI' lies in the genetic specifications
>of those functional interconnections: the bus layout of the relations
>between the low-level networks.  A large, perhaps messy software is
>there before your eyes, hiding in the gross anatomy.

This appears to be a case of a hammerer seeing visions of nails. 
Personally, I see only wetware...and neural networks.



ld231782@longs.LANCE.ColoState.EDU