[comp.ai] Simulating thinking is NOT like simulating flying

roland@cochise.pcs.com (Roland Rambau) (02/08/90)

norman@cogsci.ucsd.EDU (Donald A Norman-UCSD Cog Sci Dept) writes:

->     Do not confuse simulation with reality.  A simulated airplane does not
->     actually fly: a simulated intelligence does not actually think.
->     
->The point which I address now is that a real airplane moves through the
->air, which a simulation can't do, but a real human having thoughts moves
->information, which a simulation can do.

I think You have left out an essential point, that is: is the simulation
'realtime'? This is IMHO _not_ an unimportant technical detail.

Consider as a gedankenexperiment some _real human_ exactly like You and
me :-)  but living on a timescale several orders of magnitude different
from ours ( so his lifespan would be either several million years, or
just a fraction of a second ).
  Would we accept this fictitious man as an intelligent being like us,
or would we rather call him a totally different physical phenomenon?

I suppose we would _not_ call him intelligent, and so we do not call
simulations intelligent if they are orders of magnitude too slow.
( It's not a coincidence that most intelligence tests are to be
performed under strong time constraints :-)

--
             I know that You believe You understand what You think I said, but
             I'm not sure You realize that what You heard is not what I meant.

Roland Rambau

  rra@cochise.pcs.com,   {unido|pyramid}!pcsbst!rra,   2:507/414.2.fidonet 

hougen@umn-cs.cs.umn.edu (Dean Hougen) (02/09/90)

In article <1990Feb7.174646.245@pcsbst.pcs.com> roland@cochise.pcs.com (Roland Rambau) writes:
>I think You have left out an essential point, that is: is the simulation
>'realtime'? This is IMHO _not_ an unimportant technical detail.
>
>Consider as a gedankenexperiment some _real human_ exactly like You and
>me :-)  but living on a timescale several orders of magnitude different
>from ours ( so his lifespan would be either several million years, or
>just a fraction of a second ).
>  Would we accept this fictitious man as an intelligent being like us,
>or would we rather call him a totally different physical phenomenon?
>I suppose we would _not_ call him intelligent, and so we do not call
>simulations intelligent if they are orders of magnitude too slow.
>( It's not a coincidence that most intelligence tests are to be
>performed under strong time constraints :-)

I addressed this issue in an article in this newsgroup not too long ago,
but got no response to it.  Since you seem to think that speed is essential,
and not just "an unimportant technical detail," I would be interested in your
response to my thought experiment.

My thought exp:
	Put a man on a space ship traveling near the speed of light.
	During his round trip from Earth out and back, he comes up with
the theory of relativity.
	In our time (here on Earth) his trip takes several million years.
	To the man on the ship, only, say, 50 years have passed.
	You have just said that he was thinking too slowly, and is therefore
unintelligent.  Imagine that: great thoughts, but just too slow to be
thinking.
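
To put rough numbers on it, here is a back-of-the-envelope sketch (Python;
the 5 million figure is just picked to match "several million years" above):

    import math

    # Special relativity: earth_time = gamma * ship_time, with
    # gamma = 1 / sqrt(1 - (v/c)^2).  Invert that to find the speed
    # needed to stretch 50 ship years into 5 million Earth years.
    ship_years = 50.0
    earth_years = 5.0e6
    needed_gamma = earth_years / ship_years        # 100,000

    v_over_c = math.sqrt(1.0 - 1.0 / needed_gamma**2)
    print(v_over_c)                                # ~0.99999999995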

Well?

Dean Hougen
--
"I know whats on your mind, but its not what you think it is."  - Oingo Boingo

sticklen@cpswh.cps.msu.edu (Jon Sticklen) (02/10/90)

> My thought exp:
> 	Put a man on a space ship traveling near the speed of light.
> 	During his round trip from Earth out and back, he comes up with
> the theory of relativity.
> 	In our time (here on Earth) his trip takes several million years.
> 	To the man on the ship, only, say, 50 years have passed.
> 	You have just said that he was thinking too slowly, and is therefore
> unintelligent.  Imagine that: great thoughts, but just too slow to be
> thinking.
> 
> Well?
> 
> Dean Hougen
> --


the issue of relativistic effects is a red herring. all you have to
do to "solve" this issue is to
	... ask the great thinker when he comes back from his
	journey how long it took him to come up with the idea.
	at that point there will be two answers: one in the
	frame of the traveler, and one in the frame of the
	earth bound observer. which one is relevant? i would suggest
	that the only relevant frame is the traveler's, because a
	cesium clock (eg) would have had N ticks for the traveler
	to come up with his idea. if the traveler had stayed at
	home, the same cesium clock would have had the same number
	of ticks for the idea to incubate. the process of thinking
	is not slowed down for the traveler except as affected by
	*every* physical process that the traveler undergoes; ie
	including things that have nothing to do with thinking at
	all.

	the reason the relativistic example is a red herring is
	that the observer of (putative) intelligent behavior should
	be in the same frame as the agent performing the behavior.
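
to put numbers on that (a toy sketch in python; the gamma is invented):

	# the traveler's "thinking rate" measured against his own
	# (proper) clock is unchanged; only the earth-frame rate drops.
	gamma = 1.0e5                      # hypothetical Lorentz factor
	ship_years = 50.0                  # proper time aboard the ship
	earth_years = gamma * ship_years   # 5 million years to us

	ideas = 1                          # the theory of relativity :-)
	rate_ship = ideas / ship_years     # per year of the cesium clock
	rate_earth = ideas / earth_years   # per year of an earthbound clock
	# rate_ship is exactly what it would have been had he stayed home.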

-------------------------------------------------------------
	Jon Sticklen
	Artificial Intelligence/Knowledge Based Systems Group
	Computer Science Department
	Michigan State University
	East Lansing, MI  48824-1027
	517-353-3711
	FAX: 517-336-1061
-------------------------------------------------------------

kp@uts.amdahl.com (Ken Presting) (02/10/90)

In article <1990Feb8.213856.20116@umn-cs.cs.umn.edu> hougen@umn-cs.cs.umn.edu (Dean Hougen) writes:
>In article <1990Feb7.174646.245@pcsbst.pcs.com> roland@cochise.pcs.com (Roland Rambau) writes:
>> . . .  so we do not call
>>simulations intelligent if they are orders of magnitude slow.
>>( Its not a coincidence that most intelligence tests are to be
>>performed under strong time constraints :-)
>
>My thought exp:
>	Put a man on a space ship traveling near the speed of light.
>	During his round trip from Earth out and back, he comes up with
>the theory of relativity.
>	In our time (here on Earth) his trip takes several million years.
>	To the man on the ship, only, say, 50 years have passed.
>	You have just said that he was thinking too slowly, and is therefore
>unintelligent.  Imagine that: great thoughts, but just too slow to be
>thinking.

Perhaps both of you would agree that speed of processing is *one*
relevant measure of intelligence.  I don't see any need to set a specific
threshold in this case.  Slow thinking is less useful than quick thinking.

roland@cochise.pcs.com (Roland Rambau) (02/20/90)

ray@bcsaic.UUCP (Ray Allis) writes:

->computer programs are simulations, not models or duplications.  A simulation
->cannot be or produce duplication (except, trivially, of another simulation). 
                                                          ^^^^^^^^^^^^^^^^^^
->Therefore a computer program cannot be or produce a mind.  

But consciousness is essentially (self-)_simulation_, so a computer program
can _duplicate_ at least consciousness. And that's the most interesting part.

--

             I know that You believe You understand what You think I said, but
             I'm not sure You realize that what You heard is not what I meant.

Roland Rambau

  rra@cochise.pcs.com,   {unido|pyramid}!pcsbst!rra,   2:507/414.2.fidonet 

jgk@osc.COM (Joe Keane) (02/21/90)

Recently I've been seeing a lot of baloney getting passed off as supposedly
common-sense reasoning.  Is it just me, or are other people baffled by the
amount of nonsense in this whole discussion?

No one complains that a steel mill has a `symbol grounding problem', and no
one argues that it's only simulating making steel and polluting the air.  So
why is there such a sudden change when we talk about digital computers and
reasoning?  Why do people drag out the philosophy of consciousness and the
supposed properties of `minds'?  I might point out that no one has ever proved
`minds' exist, although the word on the street is that most humans are born
with or otherwise get one, and you don't want to lose yours.

I think `symbol cruncher' is a pejorative term for machines, much like `paper
pusher' is for humans.  The implication of this term is that computers only
push things around inside themselves, without actually doing anything useful.
On the contrary, just about everyone has at least indirectly dealt with a
computer and knows that, God forbid, computers actually cause things to
happen.  Whether it's causing a train wreck or hassling someone about a $0.00
bill, computers are out there changing the world.

Now let's get to the digital vs. analog debate.  Somehow someone got the idea
that only an analog device can be the `real' thing.  There was a lot of this
discussion when CDs first came out, and fortunately most of it has gone away.
There are many technical points for and against the digital reproduction
technology used in CDs, as compared to the analog systems in conventional LP
records and cassette tapes.  So you can say that a particular reproduction is
better or worse than another, or more or less faithful.  But do you say that
the analog LP, with all its clicks and pops, is `real', while the digital CD
is `only a simulation'?  Most music fans would immediately dismiss this
argument as ridiculous.

Or consider synthesizers.  There is the old-fashioned analog type, built out
of transistors, resistors and capacitors.  In this device the changing
voltages represent rather directly the musical waveforms being produced.  Then
there's the new-fangled digital type, which by an amazing coincidence also
contains transistors, resistors, and capacitors.  In this device the voltages
still represent the waveforms, although in a less direct way.  Again you can
argue the technical merits of the two types, and obviously some features are
easier to implement or tend to work better in one type or the other.  But are
we to believe that the analog device is a real instrument, while the digital
version is only a simulation?
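
To make "represents the waveforms in a less direct way" concrete, here is a
minimal sketch (Python; the rates are the standard CD figures, everything
else is invented for illustration).  Until a DAC turns them back into
voltages, the waveform is just a stream of numbers:

    import math

    SAMPLE_RATE = 44100     # samples per second (the CD rate)
    FREQ = 440.0            # concert A

    def sample(n):
        # amplitude of a 440 Hz sine wave at sample number n
        return math.sin(2.0 * math.pi * FREQ * n / SAMPLE_RATE)

    # roughly the first millisecond of the tone, as plain numbers
    first_ms = [sample(n) for n in range(44)]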

Anyway, enough ranting from me today.  If someone thinks there is actually
some substance to these arguments and would like to put them in a slightly
more scientific tone, please do so.  I'm interested to see what's there.

gerry@zds-ux.UUCP (Gerry Gleason) (02/21/90)

In article <20206@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:
<>From: norman@cogsci.ucsd.EDU (Donald A Norman-UCSD Cog Sci Dept)
<>This is a dangerous argument to enter for the volume of interaction is
<>high and the quality mixed.  Still, I am confused about one issue, and
<>Drew McDermott's clear and intelligent commentary has attracted me.

<This is the one where he says "[Strong AI] has the advantage of being a
<potent working hypothesis..."?  That may have been true once, for a while,
<but why should it still be accepted after all these years with zip results?

Because nothing has changed to affect its (strong AI's) position as a potent
working hypothesis (positively or negatively, I might add, as you say "with
zip results").  The lack of results only indicates that the subject matter
is subtle and complex.  With hindsight we can sit back and laugh at all
the early researchers who thought success was just around the corner, but
hindsight or not, the abounding confidence of the early days now looks
clearly naive, to say the least.  But then, for us to whom cognition is a
natural process, and an invisible one for the most part since we are forced
to look at the world through it, it is not surprising that we have vastly
underestimated its scope.

Although I found much of the rest of your posting questionable, none of it
provided any evidence for or against the strong AI hypothesis.  I challenge
anyone in Searle's camp (claiming strong AI is false) to provide convincing
evidence for this claim.  This is not an invitation to put forward more
arguments like the CR, but to provide hard mathematical proofs based on
experimental evidence.  Note that the absence of such a proof would not
necessarily doom your claim, merely put it outside the realm of science.
However, if you cannot provide a proof, your camp should leave the AI
researchers alone while they continue to explore these fertile areas of
investigation.

Gerry Gleason

zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) (02/22/90)

 I think the whole debate about "consciousness" is a terrible red herring.
 What does it really matter if a system displays "consciousness" or not,
 as long as it is capable of performing as required?

 I liken the question of "consciousness" to the question of "life".  Is a
 virus alive?  Pursuit of these questions may be a pleasant distraction,
 but they will never produce anything of value.

 These (fuzzy) terms represent concepts which simply do not 
 exist in reality.  If these terms do not reflect some aspect of
 reality, then there can never be a correct definition for them.

   -- Who cares how the black box works, as long as it works?

------------------------------------------------------------------------
These opinions are obviously mine unless you share them.

---Paul...
 

sticklen@cpswh.cps.msu.edu (Jon Sticklen) (02/22/90)

From article <48c9f211.1a4d7@cicada.engin.umich.edu>, by zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy):
>
...
>    -- Who cares how the black box works, as long as it works?
>


that clearly is one approach to AI. but it is not the only one.
others of us would like to use the experience of building
"black boxes that work" as a springboard to reaching for
principles that underlie intelligent behavior.

        ---jon---

-------------------------------------------------------------
	Jon Sticklen
	Artificial Intelligence/Knowledge Based Systems Group
	Computer Science Department
	Michigan State University
	East Lansing, MI  48824-1027
	517-353-3711
	FAX: 517-336-1061
-------------------------------------------------------------

ian@oravax.UUCP (Ian Sutherland) (02/23/90)

In article <6557@cps3xx.UUCP> sticklen@cpswh.cps.msu.edu (Jon Sticklen) writes:
>others of us would like to use the experience of building
>"black boxes that work" as a springboard to reaching for
>principles that underlie intelligent behavior.

This is a perfectly reasonable, time-honored approach to such a
problem.  In order to apply it, however, you must first HAVE some
"black boxes that work".  Can anyone point to a single instance of
such?  It is my impression that many people in AI want to figure out
what intelligent behavior is in complete generality before they build
the "black boxes that work".  I personally think this is the wrong
approach.
-- 
Ian Sutherland		ian%oravax.uucp@cu-arpa.cs.cornell.edu

Sans Peur

rambow@grad2.cis.upenn.edu (Owen Rambow) (02/23/90)

In article <1360@oravax.UUCP> ian@oravax.odyssey.UUCP (Ian Sutherland) writes:
>It is my impression that many people in AI want to figure out
>what intelligent behavior is in complete generality before they build
>the "black boxes that work".

Many people in comp.ai, at any rate.

Owen

Mais Avec Faim

smoliar@vaxa.isi.edu (Stephen Smoliar) (02/23/90)

In article <48c9f211.1a4d7@cicada.engin.umich.edu> zarnuk@caen.UUCP (Paul
Steven Mccarthy) writes:
>
> I think the whole debate about "consciousness" is a terrible red herring.
> What does it really matter if a system displays "consciousness" or not,
> as long as it is capable of performing as required?
>
There is a useful ring of pragmatics here, but I would like to respond with
the possibility that "consciousness" may be less of a red herring than Paul
had in mind.  The problem is that, just as some folks can play fast and
loose with a word like "consciousness" until they have lost all sight of why
they invoked it in the first place, others can do the same with the phrase
"performing as required."  Let me try to put in a few kind words in favor of
a JUDICIOUS use of the term consciousness.  (I should note that these
thoughts have emerged as a result of my current reading of Gerald Edelman's
new book, THE REMEMBERED PRESENT:  A BIOLOGICAL THEORY OF CONSCIOUSNESS.)

Those who talk about the technological limitations of expert systems often
gravitate to major obstacles which still exist in the domain of learning.
I would say that there is little argument that expert systems do not learn
the way people do.  We are still a far cry from an expert system which can
(figuratively) sit down with a physics textbook, read a chapter, and start
working on the problems at the end of the chapter.  Part of the problem stems
from the fact that that chapter consists not only of definitions and equations
but also of sample problems and their solutions.  I do not know about other
readers;  but, I, for one, found these essential to my education.  Working
on a new problem could always benefit from drawing upon a MODEL of some other
problem whose solution was understood.

The point I am getting at here is that my education was not a matter of
building up a "rule base" of what to do in given problem situations . . . at
least not at the level of rules for setting up and solving equations.  If there
were any rules at all, they were at a higher level and concerned drawing upon
my memories of other problem solving experiences (including passive ones
resulting from reading the textbook) and then adapting those memories to
suit my present needs.  I would argue that what I am talking about here
is an activity which is very tightly coupled to what we mean when we talk
about consciousness, since what it involves is an explicit inspection of
what may best be described as my "mental state" and subsequent manipulation
of what I find there.

Does it matter that such an activity be labeled as "consciousness?"  This
question can be argued either way.  My personal feeling is that if we are
having trouble modeling the kind of introspective problem solving based on
experience which I just cited (and now that there seems to be a major thrust
in the direction of case-based reasoning, we ARE beginning to encounter some
limitations), then we may benefit from recognizing that we are dealing with
a fundamental issue of consciousness.  Then, we can decide, from a purely
pragmatic point of view, whether any insights regarding consciousness which
have emerged from outside our own community (be they from psychology, biology,
or philosophy) might be of use to us.
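
For concreteness, here is a minimal sketch (Python; the names and the toy
similarity measure are invented for illustration) of the retrieve-and-adapt
loop at the heart of case-based reasoning.  The limitations alluded to above
live precisely in the two hard parts: judging real similarity and doing real
adaptation.

    from dataclasses import dataclass

    @dataclass
    class Case:
        problem: dict    # features of a previously solved problem
        solution: str    # what worked for it

    def similarity(p, q):
        # toy measure: number of shared feature/value pairs
        return sum(1 for k in p if q.get(k) == p[k])

    def solve(new_problem, library):
        # retrieve the most similar stored case, then adapt it;
        # here "adaptation" is a stub, which is exactly the point
        best = max(library, key=lambda c: similarity(new_problem, c.problem))
        return "adapt: " + best.solution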
>
> These (fuzzy) terms represent concepts which simply do not 
> exist in reality.  If these terms do not reflect some aspect of
> reality, then there can never be a correct definition for them.
>
Regardless of whether or not the concepts "exist in reality," the terms may
still carry some amount of informative baggage.  All I'm arguing is that we
should be able to use whatever means are at our disposal to mine information
which may benefit us.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"Only a schoolteacher innocent of how literature is made could have written
such a line."--Gore Vidal

cam@aipna.ed.ac.uk (Chris Malcolm) (02/25/90)

In article <1360@oravax.UUCP> ian@oravax.odyssey.UUCP (Ian Sutherland) writes:

>It is my impression that many people in AI want to figure out
>what intelligent behavior is in complete generality before they build
>the "black boxes that work".  I personally think this is the wrong
>approach.

Where d'you get this impression from? Can't think of any such people
among the hundred or so AI people here. Hands up those AI researchers
who've been giving this impression!

Actually, I think I know where you get this impression from: it comes
from reading books and articles about AI written by people who know very
little about either AI or computers. There's a lot of it about. Terribly
unscientific. I blame the "education" system.

-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

mw@wuche2.wustl.edu (Montree Wongsri) (02/25/90)

In article <1344@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:
>
>In article <1990Feb13.121154.6087@pcsbst.pcs.com>, roland@cochise.pcs.com
>(Roland Rambau) writes:
>> > Imagine that: great thoughts, but just too slow to be thinking.
>> 
>> Why? Because speed ( time ) is an essential feature of intelligence :-)
>I think this *argument* is a little absurd! Nobody is searching for
>intelligence in the Rocky Mountains because they have no reason to
>
>Are you basing your idea that speed is an essential feature of
>intelligence on the fact that IQ tests are timed? This seems pretty
>silly to me. The reason IQ tests are timed, in my opinion, is that
>
>
I quite agree with Daryl.  It is shallow to think that
speed is an *essential* feature of intelligence.
A trivial example: are you (Roland) really convinced that a small
PC machine, which *essentially* computes even simple multiplication
faster than you, is an intelligent machine, or that it is smarter
than you are in this regard?


	Montree

kp@uts.amdahl.com (Ken Presting) (02/27/90)

In article <1990Feb24.234005.15474@wuche2.wustl.edu> mw@wuche2.UUCP writes:
> . . .  It is shallow to think that
>speed is an *essential* feature of intelligence.
>A trivial example: are you (Roland) really convinced that a small
>PC machine, which *essentially* computes even simple multiplication
>faster than you, is an intelligent machine, or that it is smarter
>than you are in this regard?

Ooh.  Ooh.  Another chance to ride my favorite hobby-horse.

If we stop trying to define an "essence" for intelligence, and instead
define a collection of scales for comparing different aspects of
intelligence, we don't have to debate this issue at all!  (there are
plenty of other things... :-)

I propose that one aspect of intelligence is speed.
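
To ride it a little further: a minimal sketch (Python; the particular scales
are invented for illustration) of a profile of scales in place of an essence:

    from dataclasses import dataclass

    @dataclass
    class IntelligenceProfile:
        speed: float        # e.g. problems solved per unit time
        generality: float   # breadth of problem domains handled
        depth: float        # length of inference chains sustained

    def compare(a, b, aspect):
        # compare two agents on one named scale; there is no single
        # "overall intelligence" number to argue about
        return getattr(a, aspect) - getattr(b, aspect)

On this view the PC wins on multiplication speed and loses nearly everywhere
else, and there is nothing left to debate.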

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/20/90)

In article <48c9f211.1a4d7@cicada.engin.umich.edu> zarnuk@caen.UUCP (Paul Steven Mccarthy) writes:
>
> I think the whole debate about "consciousness" is a terrible red herring.
> What does it really matter if a system displays "consciousness" or not,
> as long as it is capable of performing as required?

Who knows what "really matters"?  Some people are interested in
consciousness and not just behavior.  That doesn't mean *you* have
to be interested.  If everyone were interested in the same things
the world would be a poorer place.

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/20/90)

In article <48c9f211.1a4d7@cicada.engin.umich.edu> zarnuk@caen.UUCP (Paul Steven Mccarthy) writes:
> I liken the question of "consciousness" to the question of "life".  Is a
> virus alive?  Pursuit of these questions may be a pleasant distraction,
> but they will never produce anything of value.

But maybe it's like the question of when humans are alive.  This
certainly matters to some people and it's hard to see why nothing
of value can be produced by considering it.

> These (fuzzy) terms represent concepts which simply do not 
> exist in reality.  If these terms do not reflect some aspect of
> reality, then there can never be a correct definition for them.

Maybe _you're_ not conscious, then?  It's one thing to say machine
consciousness might not be verifiable, quite another to say
consciousness might not reflect some aspect of reality.  It's
certainly reasonable to say it's an aspect of human reality.