[talk.philosophy.misc] Simulating thinking is NOT like simulating flying

roland@cochise.pcs.com (Roland Rambau) (02/20/90)

ray@bcsaic.UUCP (Ray Allis) writes:

->computer programs are simulations, not models or duplications.  A simulation
->cannot be or produce duplication (except, trivially, of another simulation). 
                                                          ^^^^^^^^^^^^^^^^^^
->Therefore a computer program cannot be or produce a mind.  

But consciousness is essentially (self-)_simulation_, so a computer program
can _duplicate_ at least consciousness. And that's the most interesting part.

--

             I know that You believe You understand what You think I said, but
             I'm not sure You realize that what You heard is not what I meant.

Roland Rambau

  rra@cochise.pcs.com,   {unido|pyramid}!pcsbst!rra,   2:507/414.2.fidonet 

jgk@osc.COM (Joe Keane) (02/21/90)

Recently I've been seeing a lot of baloney getting passed off as supposedly
common sense reasoning.  Is it just me, or are other people baffled by the
amount of nonsense in this whole discussion?

No one complains that a steel mill has a `symbol grounding problem', and no
one argues that it's only simulating making steel and polluting the air.  So
why is there such a sudden change when we talk about digital computers and
reasoning?  Why do people drag out the philosophy of consciousness and the
supposed properties of `minds'?  I might point out that no one has ever proved
`minds' exist, although the word on the street is that most humans are born
with or otherwise get one, and you don't want to lose yours.

I think `symbol cruncher' is a pejorative term for machines, much like `paper
pusher' is to humans.  The implication of this term is that computers only
push things around inside themselves, without actually doing anything useful.
On the contrary, just about anyone has at least indirectly dealt with a
computer and they know that, God forbid, computers actually cause things to
happen.  Whether it's causing a train wreck or hassling someone about a $0.00
bill, computers are out there changing the world.

Now let's get to the digital vs. analog debate.  Somehow someone got the idea
that only an analog device can be the `real' thing.  There was a lot of this
discussion when CDs first came out, and fortunately most of it has gone away.
There
are many technical points for and against the digital reproduction technology
used in CDs, as compared to the analog system in conventional LP records or
that in cassette tapes.  So you can say that a particular reproduction is
better or worse than another, or more or less faithful to the original.  But
do you say
that the analog LP, with all its clicks and pops, is `real', while the digital
CD is `only a simulation'?  Most music fans would immediately dismiss this
argument as ridiculous.

Or consider synthesizers.  There is the old-fashioned analog type, built out
of transistors, resistors and capacitors.  In this device the changing
voltages represent rather directly the musical waveforms being produced.  Then
there's the new-fangled digital type, which by an amazing coincidence also
contains transistors, resistors, and capacitors.  In this device the voltages
still represent the waveforms although in a less direct way.  Again you can
argue technical merits of the two types, and obviously some features are
easier to implement or tend to work better in one type or the other.  But are we
to believe that the analog device is a real instrument, while the digital
version is only a simulation?
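
To make the point concrete, here is a minimal sketch (in Python, and not taken
from any real synthesizer's code) of what a digital oscillator does: it
computes numbers that stand for the very same waveform an analog oscillator
produces directly as a voltage.  The names and sample rate are illustrative
assumptions only.

    # A minimal sketch: a digital oscillator computes successive samples
    # of a sine wave.  Fed through a DAC, these numbers become the same
    # physical waveform an analog oscillator would produce as a voltage.
    import math

    SAMPLE_RATE = 44100        # samples per second, as on a CD

    def sine_samples(freq_hz, duration_s, amplitude=1.0):
        """Return sample values for a sine wave at freq_hz."""
        n_samples = int(SAMPLE_RATE * duration_s)
        return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
                for t in range(n_samples)]

    # One second of concert A (440 Hz); the numbers *are* the waveform,
    # represented discretely rather than as a continuous voltage.
    samples = sine_samples(440.0, 1.0)

The representation is less direct, but what comes out of the speaker is sound
either way, not a "simulation" of sound.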

Anyway, enough ranting from me today.  If someone thinks there is actually
some substance to these arguments and would like to put them in a slightly
more scientific tone, please do so.  I'm interested to see what's there.

gerry@zds-ux.UUCP (Gerry Gleason) (02/21/90)

In article <20206@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:
<>From: norman@cogsci.ucsd.EDU (Donald A Norman-UCSD Cog Sci Dept)
<>This is a dangerous argument to enter for the volume of interaction is
<>high and the quality mixed.  Still, I am confused about one issue, and
<>Drew McDermott's clear and intelligent commentary has attracted me.

<This is the one where he says "[Strong AI] has the advantage of being a
<potent working hypothesis..."?  That may have been true once, for a while,
<but why should it still be accepted after all these years with zip results?

Because nothing has changed to affect its (strong AI's) position as a potent
working hypothesis (positively or negatively, I might add, as you say "with
zip results").  The lack of results only indicates that the subject matter
is subtle and complex.  With hindsight we can sit back and laugh at all
the early researchers who thought success was just around the corner, but
hindsight or not the abounding confidence of the early days is now clearly
naive to say the least.  But then, since cognition is for us a natural
process, and an invisible one for the most part because we are forced to look
at the world through it, it is not surprising that we have vastly
underestimated its scope.

Although I found much of the rest of your posting questionable, none of it
provided any evidence for or against the strong AI hypothesis.  I challenge
anyone in Searle's camp (claiming strong AI is false) to provide convincing
evidence for this claim.  This is not an invitation to put forward more
arguments like the CR, but to provide hard mathematical proofs based on
experimental evidence.  Note that the absence of such a proof would not
necessarily doom your claim, merely put it outside the realm of science.
However, if you cannot provide a proof, your camp should leave the AI
researchers alone while they continue to explore these fertile areas of
investigation.

Gerry Gleason