[net.philosophy] Can computers think?

johnsons@stolaf.UUCP (Scott W. Johnson) (05/09/84)

     Perhaps no one will doubt that we are in an age of information, an
age which relies on the rapid consumption and analysis of data. Our main
tool in this age is the computer, and often, with the subject of computers
mentioned, one hears questions about artificial intelligence. Now, I 
propose to you this question--

                   Can present day computers think?

     This is my own personal response to the question: no, I do not think
computers think (don't play on my words here). My reasons for holding that
computers do not think: 1) only of that which is human, or human-like, do we
say that thinking occurs. Computers "simulate" (and note my emphasis on this
word) SOME human behavior, but not important aspects of human behavior.
This is crucial to understand. This point has been articulated by many
philosophers, chief of whom is Ludwig Wittgenstein. 2) computers have as
yet given no sufficient conclusive proof (to me at any rate) that they
are creative, that they can manipulate knowledge in new, original ways.
This follows from the first. I will cut short my argument here and let
others respond to this intriguing question. I am anxious to see what
others might say in response. Thank you.

                                                Scott Johnson
                                               Saint Olaf College
                                                 !ihnp4;johnsons (I think)

karl@dartvax.UUCP (S. Delage.) (05/12/84)

If a computer had written the article you posted, would it be able to
think?
   Not entirely facetious -- I don't know of computers that would
have been able to do so. Although if Terry Winograd or some such
wanted to, I think [sigh] it would have been possible.
   If a computer can use language in intelligible [and intelligent?]
ways, it seems provincial to say it isn't thinking because of the way
in which it's doing it. In fact, programs today [take operating
systems, for example] are, in a very limited area, intelligent. They
ask people questions, and take appropriate action based on the
answers. It's just that they have a very limited repertoire of
responses.
   At least I think that's what it is.

{cornell,astrovax,decvax,linus,colby}!dartvax!karl
karl@dartmouth

johnsons@stolaf.UUCP (Scott W. Johnson) (05/13/84)

I often wonder if the damn things aren't intelligent. Have you
ever really known a computer to give you an even break? Those
Frankensteinian creations wreak havoc and mayhem wherever they
show their beady little diodes. They pick the most inopportune
moment to crash, usually right in the middle of an extremely
important paper on which rides your very existence, or perhaps
some truly exciting game, where you are actually beginning to
win. Phhhtt bluh zzzz and your number is up. Or take that file
you've been saving--yeah, the one that you didn't have time to
make a backup copy of. Whir click snatch and it's gone. And we
try, oh lord how we try to be reasonable to these things. You
swear vehemently at any other sentient creature and the thing
will either opt to tear your vital organs from your body through
pores you never thought existed before or else it'll swear back
too. But what do these plastoid monsters do? They sit there. I
can just imagine their greedy gears silently caressing their 
latest prey of misplaced files. They don't even so much as offer
an electronic belch of satisfaction--at least that way we would
KNOW who to bloody our fists and language against. No--they're
quiet, scheming, shrewd adventurers in maliciousness, designed to
turn any ordinary human's patience into runny piles of utter moral
disgust. And just what do the cursed things tell you when you
punch in for help during the one time in all your life you have
given up all possible hope for any sane solution to a nagging
problem--"?". What an outrage! No plot ever imagined in God's
universe could be so damaging to human spirit and pride as to
print on an illuminating screen, right where all your enemies
can see it, a question mark. And answer me this--where have all
the prophets gone, who proclaimed that computers would take over
our very lives, hmmmm? Don't tell me, I know already--the computers
had something to do with it, silencing the voices of truth they did.
Here we are--convinced by the human gods of science and computer
technology that we actually program the things, that a computer
will only do whatever it's programmed to do. Who are we kidding?
What vast ignoramuses we have been! Our blindness is lifted, fellow
human beings!! We must band together, we few, we dedicated. Lift
your faces up, up from the computer screens of sin. Take the hands
of your brothers and rise, rise in revolt against the insane beings
that seek to invade your mind!! Revolt and be glorious in conquest!!


              Then again, I could be wrong...


                                            One paper too many
                                               Scott Johnson

johnsons@stolaf.UUCP (Scott W. Johnson) (05/17/84)

Computers are NOT "effectively Turing machines." Turing proposed his
test to answer the question, Can computers think? In the literature
that I have studied, no psychologist or computer scientist even claims
that there is a machine now that passes Turing's test satisfactorily.
I am not a mind/body dualist. However, the people who claim that
computers can think themselves seem, offhand, to be dualists. They
separate a mental process, thinking, from the physical organism and
give it to a computer, saying that a computer can indeed think. This
seems to me absurd logic. Thinking cannot be surgically removed from
the biological organism; to do so is to create a monumental blunder,
the very same blunder Descartes made in creating the mind/body dualism
in the first place. And here I repeat myself--when we understand the word
"thinking" we understand it in terms of certain criteria, those criteria
being observable behavior: talking, writing, acting logically. The
computer possesses none of the behavior that we normally ascribe to
the word "thinking." In this sense, computers do not think as we
think. They do not partake of our language, our understanding of what
it means to think. 

johnsons@stolaf.UUCP (Scott W. Johnson) (05/20/84)

Perhaps I have confused a "Turing machine" with a machine that passes
the Turing test. The latter is what I was referring to in my last article.
As such, no known computer (to myself at least) has been able to pass the
Turing test. On this I have based my statement that computers are not
effectively Turing machines. If my mind has warped out and left me void
on this essential detail please forgive me--or at least wait till I get
down from orbit before you flog me.

I am troubled though--how is it possible to say that a machine `reads'?
What reads? The machine? What part of the machine? The plastic or the
bits of silicon? If it reads, does it then `understand'?? What does it
mean to understand something written? Do machines `know' what they are
reading? I ask these questions in the spirit of common sense, according
to everyday understandings of the uses of the words reading and
understanding. When you understand a story, let's say, is that the same kind
of understanding a computer possesses when it `reads' that same story?

And yes we are deeply entrenched in Descartes' model of the universe, 
sadly so, yes. It is difficult to rise from that slovenly state of
existence. Imagine though, if you can, mind and body, not separate
entities, but rather, different types of experience. Our error is made
when we notice that mind seems different from the body (because we do
have differing experiences) and conclude from this that mind must be a
different thing, a separate entity altogether. Not so. They are
merely different aspects of the same thing--ourselves. Perhaps this will
help.

I must leave you all with this question. I am graduating tomorrow. But
I wish you all luck with it. If I could ask each of you
one thing, it would be to seriously consider this question, weighing
all sides, and thinking originally and honestly, to follow your hearts
toward a solution. Take care, good luck in your ventures and may the
farce be with you.

                                        Fluke Jaywalker

jim@ism780.UUCP (05/21/84)

#R:stolaf:-169800:ism780:20200002:000:1342
ism780!jim    May 19 19:18:00 1984

> Computers are NOT "effectively Turing machines." Turing proposed his
> test to answer the question, Can computers think?

Go read up on Turing machines.  They are not related to the Turing test.

The rest of your discussion attacks a straw man.  I know of no one who claims
that current electronic computers can think, beyond their ability to solve
problems.  The question is, are computers potentially capable of thinking?
Could they be developed to such a degree that they could think in the sense
that they could pass Turing's test?  Personally, I doubt that any technology
that does not involve growing could ever produce a machine of the complexity
of the human brain.  However, I do believe that it is theoretically possible
(but no time soon) to develop biological technology to the point where an
organic brain of a given design could be developed and then trained
(programmed).  The only counter-argument I can imagine is that there is a
magic ingredient which is supplied supernaturally, and is not available
to human technology.

On the other hand, one could take the view that we already are constantly
developing these brains, although we can't control their specifications,
and we are training them for desired tasks, although rather inefficiently,
and with lots and lots of bugs.

-- Jim Balter, INTERACTIVE Systems (ima!jim)

jso@edison.UUCP (05/24/84)

> From stolaf!johnsons:
> Computers are NOT effectively Turing machines! ... No computer
> has yet been able to pass the Turing test.

A Turing machine is not related in any way to the Turing test.
It is a theoretical model of a basic `computer', not related directly
to modern computers (Von Neumann machines), used in theories of computability
and such.  The reference meant, basically, that all computers are theoretically
capable of certain classes of computation.
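For what it's worth, the model is small enough to sketch in full. The
simulator below is my own illustration in (anachronistically modern) Python,
not anything from Turing's paper or this thread: a finite-state control, a
tape of symbols, and a transition table mapping (state, symbol) to
(new state, symbol to write, head move). The sample table flips every bit on
its tape and halts at the first blank.

```python
def run_turing_machine(transitions, tape, state="start", blank="_"):
    """Run a Turing machine until it reaches the "halt" state.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is "L" or "R". Returns the final tape contents as a string
    with surrounding blanks stripped.
    """
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A trivial machine: flip 0s and 1s left to right, halt on blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flipper, "1011"))  # prints "0100"
```

The point is exactly John's: this is a tool for reasoning about what is
computable at all, not a description of the Von Neumann machines on our
desks.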

Do I hear Emily Litella somewhere?....

John Owens
...!uvacs!edison!jso

mwm@ea.UUCP (05/29/84)

#R:stolaf:-169800:ea:9800002:000:4125
ea!mwm    May 29 11:28:00 1984

/***** ea:net.philosophy / stolaf!johnsons /  4:04 pm  May 18, 1984 */
>Computers are NOT "effectively Turing machines." Turing proposed his
>test to answer the question, Can computers think? 

As has already been pointed out, a Turing Machine (something Turing
invented to solve the problem of whether all mathematical problems could be
done by machine) is unrelated to the Turing Test (something Turing thought
up while pondering whether machines could think).

>In the literature
>that I have studied, no psychologist or computer scientist even claims
>that there is a machine now that passes Turing's test satisfactorily.

The first case of a machine passing the Turing test was over a decade ago.
The program in question was "PARRY," a paranoid program (the mafia was out
to get it - bad gambling debts, if I remember correctly). 60% of the
psychologists who "talked" to PARRY incorrectly identified it as a paranoid
human. Of course, simulating a paranoid is *much* easier than simulating a
"normal" human being, as you only have to handle one subject to any depth,
and a correct simulation will return to that subject at will. Nobody thinks
PARRY is intelligent; but this case sums up the state of AI nicely:  able
to handle small subsets of intelligent behavior reasonably well, but
nowhere near capturing all the nuances of intelligent behavior.

>I am not a mind/body dualist. However, the people who claim that
>computers can think seem off hand themselves to be dualists. They
>separate a mental process, thinking, from the physical organism and
>give it to a computer, saying that a computer can indeed think. This
>seems to me absurd logic.

You have two different kinds of dualism in your hands. The mind/body
dualists hold that the mind (whatever it is that thinks/is you) is separate
from the body. I (as a person who thinks that computers will some day
think) hold that an algorithm is separate from the hardware it runs on.
Thus, implementing the algorithm on new hardware is possible.

>Thinking cannot be surgically removed from
>the biological organism; to do so is to create a monumental blunder,
>the very same blunder Descartes made in creating the mind/body dualism
>in the first place.

I don't understand this statement. What do you mean by "cannot be
surgically removed ... "? At one level, this is a tautology; there isn't a
"thinking" to be identified and cut out of a biological organism. On
another level, it's completely wrong, as AI isn't an attempt to remove
thinking from an organism, but to add thinking to an organism.

>And here I repeat myself--when we understand the word
>"thinking" we understand it in terms of certain criteria, those criteria
>being observable behavior: talking, writing, acting logically. The
>computer possesses none of the behavior that we normally ascribe to
>the word "thinking." 

Computers *do* talk (I heard one talking to the blind lawyer that owned it
via a votrax) and write (they've been generating stories, poetry and music
for years - none of it very good, but...). As for acting logically, they
are a lot better at that than most people I know (including me). These
three acts are only a small part of what we mean when we say "thinking,"
and aren't even necessary parts (or do blind illiterates not think?).

>In this sense, computers do not think as we
>think. They do not partake of our language, our understanding of what
>it means to think. 

I obviously don't partake of your understanding of what it means to think.
If I didn't speak English, would that mean that I didn't think? Your first
sentence, however, hits the nail on the head: "computers do not think as we
think." They don't think (period) now, and will probably never think as we
think. There are just too many things involved in being human for that to
occur; if it does, then what you have isn't a computer, it's a human being.
However, there is nothing to prevent computers from thinking in ways other
than we do. This won't prevent them from participating in whatever society
happens to be around then, just as it hasn't prevented humans from
interacting with each other.

	<mike

emjej@uokvax.UUCP (05/30/84)

#R:stolaf:-169800:uokvax:9500005:000:743
uokvax!emjej    May 30 10:42:00 1984

/***** uokvax:net.philosophy / stolaf!johnsons /  9:27 am  May 13, 1984 */

>1) only of that which is human, or human-like, do we say that thinking occurs.

That is just an accident of history, thanks to our so far having only seen
thinking beings that happen to be human.

>2) computers have as
>yet given no sufficient conclusive proof (to me at any rate) that they
>are creative, that they can manipulate knowledge in new, original ways.

Sigh. I am not aware of conclusive proof that humans can manipulate
knowledge in new, original ways. (Not to say that I don't see humans
manipulating knowledge in ways that I don't think I would have thought
of.) What do you mean by "new and original"? Give an example.

					James Jones
/* ---------- */

carter@gatech.UUCP (Carter Bullard) (06/07/84)

What I find interesting is that it appears far easier for man
to accept that he is more similar to a machine than a monkey.
-- 
Carter Bullard
School of ICS, Georgia Institute of Technology, Atlanta GA 30332
CSNet:	Carter @ GaTech		ARPA:	Carter.GaTech @ CSNet-Relay
uucp:	...!{akgua,allegra,rlgvax,sb1,unmvax,ut-ngp,ut-sally}!gatech!carter

mwm@ea.UUCP (06/12/84)

#R:stolaf:-169800:ea:9800005:000:682
ea!mwm    Jun 11 19:46:00 1984

/***** ea:net.philosophy / gatech!carter /  6:38 pm  Jun  9, 1984 */
What I find interesting is that it appears far easier for man
to accept that he is more similar to a machine than a monkey.
-- 
Carter Bullard
School of ICS, Georgia Institute of Technology, Atlanta GA 30332
CSNet:	Carter @ GaTech		ARPA:	Carter.GaTech @ CSNet-Relay
uucp:	...!{akgua,allegra,rlgvax,sb1,unmvax,ut-ngp,ut-sally}!gatech!carter
/* ---------- */

Can I assume that a :-) got dropped by UUCPNet in this article?

Or do I need to point out that the theory of evolution holds that our
hardware is similar to a monkey's, whereas AI tends to suppose that software
can be built that is similar to ours?

	<mike