[comp.ai] Defining Machine Intelligence.

lishka@uwslh.UUCP (Fish-Guts) (11/20/88)

In article <1111@dukeac.UUCP> sbigham@dukeac.UUCP (Scott Bigham) writes:
>In article <401@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>>I believe that artificial intelligence
>>is possible, but that machine intelligence will probably *NOT*
>>resemble human intelligence...
>
>So how shall we define machine intelligence?  More importantly, how will we
>recognize it when (if?) we see it?
>
>						sbigham

     A good question, to which I do not have a good answer.  I *have*
thought about it quite a bit, though ... however, I haven't come up
with much that I am satisfied with.  Here are my current lines of
thought on this subject:

     Many (if not most) attempts at definitions of "machine
intelligence" relate it to "human intelligence." However, I have yet
to find a good definition of "human intelligence" that is less vague
than a dictionary's definition.  It would seem (to me at least) that
AI scientists (as well as scientists in many other fields) have yet to
come up with a good, working definition of "human intelligence" that
most will accept.  Rather, most AI people I have spoken with
(including myself ;-) have a vague notion of what "human intelligence"
is, or else have definitions of "human intelligence" that rely on
many personal assumptions.  I still do not think that the AI community
has developed a definition of "human intelligence" that can be
universally presented in an introductory course on AI.  It is no
wonder, then, that there is no commonly accepted definition of machine
intelligence (which would seem to be a crucial definition in AI, IMHO).

     So how do we define machine intelligence?  I propose that we
define it apart from human intelligence at first, and try to relate it
to human intelligence afterwards.  In my opinion, machine intelligence
does not have to be the same as human intelligence (and probably will
not), for reasons I have mentioned in other articles.  From what I
have read here, I believe that at least a few other people in this
group also feel this way. 

     First, the necessary "features" of machine intelligence should be
discussed and decided upon.  It is important that this be done
*without* considering current architectures and AI knowledge; the
"features" should be for an ideal "machine intelligence," and not
geared towards something that can be achieved in fifty years.  Also,
human intelligence should be *considered* at this point, but not used
as a *basis* for defining machine intelligence; intelligence in other
beings (mammals, birds, insects, rocks (;-), whatever) should also be
considered. 

     Second, after having figured out what we want machine
intelligence to be, we should then try and come up with some good
"indicators" that could be used to tell whether an AI system exhibits
machine intelligence.  These indicators can include specific tests,
but I have a feeling that tests for any form of intelligence have
never been very good indicators (note that I do not put that much
value on IQ tests as measures of intelligence).  Indicators of
intelligence in humans and other beings should be considered here as
well (i.e. what do we feel is a good sign that someone is intelligent?).

     After all that is done (and it may never get done ;-), then we
can try and compare the result to human intelligence.  Chances are the two
definitions of intelligence (for machines and humans) will be
different.  Of course, if, in looking at human intelligence, some
important points of machine intelligence have been missed, then
revisions are in order ... there is always time to revise the
definition.

      I am sorry that I could not provide a concrete definition of
what machine intelligence is.  However, I hope I have provided a
small framework for discussions on how to go about defining machine
intelligence.  And of course all the above is only my view on the
subject, and is subject to change; do with it what you will ... if you
want to print it up and use it as bird-cage liner, well that is fine
by me ;-)

					.oO Chris Oo.
-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp

		 "I'm not aware of too many things...
		  I know what I know if you know what I mean"
		    -- Edie Brickell & the New Bohemians

ok@quintus.uucp (Richard A. O'Keefe) (11/20/88)

In article <404@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>     Many (if not most) attempts at definitions of "machine
>intelligence" relate it to "human intelligence." However, I have yet
>to find a good definition of "human intelligence" that is less vague
>than a dictionary's definition.  It would seem (to me at least) that
>AI scientists (as well as scientists in many other fields) have yet to
>come up with a good, working definition of "human intelligence" that
>most will accept.  Rather, most AI people I have spoken with
>(including myself ;-) have a vague notion of what "human intelligence"
>is, or else have definitions of "human intelligence" that rely on
>many personal assumptions.  I still do not think that the AI community
>has developed a definition of "human intelligence" that can be
>universally presented in an introductory course on AI.  It is no
>wonder, then, that there is no commonly accepted definition of machine
>intelligence (which would seem to be a crucial definition in AI, IMHO).

I think it is useful to bear in mind that "intelligence" is a _social_
construct.  We can identify particular characters which are associated
with it, and we may be able to measure those.  (For example, one of the
old intelligence tests identified knowing that Crisco (sp?) is a cooking
oil as a component of intelligence.)  It is _NOT_ the responsibility of
AI people to define "human intelligence".  It is the job of sociologists
to determine how the notion of "intelligence" is deployed in various
cultures, and of psychologists to study whatever aspects turn out to be
based on mental characteristics of the individual.

The field called "Machine Intelligence" or "Artificial Intelligence" is
something which originated in a particular related group of cultures and
took the "folk" notion of "intelligence" as its starting point.  We wave
our hands a bit, and say "you know how smart people are, and how dumb
machines are, well, we want to make machines smarter."  At some point we
will declare victory, and whatever we have at that point, _that_ will be
the definition of "machine intelligence".  ("Intelligent" is already used
to mean "able to perform the operations of a computer", so is "smart" in
the phrase "smart card".)

Let's face it, 13th century philosophers didn't have a definition of "mass",
"potential field", "tensor", or even "hadron" when they started out trying
to make sense of motion.  They used the ordinary language they had.  The
definitions came _last_.

There are at least two approaches to AI, which may be caricatured as
(1) "Let's build a god"
(2) "Let's build amplifiers for the mind"
I belong to the second camp:  I don't give a Continental whether we end
up with "machine intelligences" or not, just so long as we end up with
cognitive tools which are far more intelligible to humans than what we
have now.  For the first camp, the possibility of "inhuman" machine
intelligences is of interest.  It would definitely be a kind of success.
For the second camp, something which is not close enough to the human
style to be readily comprehended by an ordinary human would be an utter
failure.

We are still close enough to the beginnings of AI (whatever that is) that
both camps can pursue their goals by similar means, and have useful things
to say to each other, but don't confuse them!

lishka@uwslh.UUCP (Fish-Guts) (11/23/88)

In article <713@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>In article <404@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>
>I think it is useful to bear in mind that "intelligence" is a _social_
>construct.  We can identify particular characters which are associated
>with it, and we may be able to measure those.  (For example, one of the
>old intelligence tests identified knowing that Crisco (sp?) is a cooking
>oil as a component of intelligence.)  It is _NOT_ the responsibility of
>AI people to define "human intelligence".  It is the job of sociologists
>to determine how the notion of "intelligence" is deployed in various
>cultures, and of psychologists to study whatever aspects turn out to be
>based on mental characteristics of the individual.

     Yes, there will never be one "correct" measure of intelligence.
I am *not* proposing that AI people define "human intelligence."
However, in light of the fact that many AI people want to make
"intelligent" machines, I feel there should be some form of definition
or criteria for defining "artificial" or "machine" intelligence.

>The field called "Machine Intelligence" or "Artificial Intelligence" is
>something which originated in a particular related group of cultures and
>took the "folk" notion of "intelligence" as its starting point.  We wave
>our hands a bit, and say "you know how smart people are, and how dumb
>machines are, well, we want to make machines smarter."  At some point we
>will declare victory, and whatever we have at that point, _that_ will be
>the definition of "machine intelligence".  ("Intelligent" is already used
>to mean "able to perform the operations of a computer", so is "smart" in
>the phrase "smart card".)

     Unfortunately, there is already some of this going around.
I have heard claims that "Expert Systems" and "Chess Programs" embody
true intelligence in machines (usually by people who do not understand
the insides of the systems); I refuse to believe these claims.  Some
people used to think that the program "Eliza" (or "Doctor") was
"intelligent."  There are many programs sold on the market that claim
to use "Artificial Intelligence," in order to sell more copies.
Without some sort of objective criteria, it is rather hard to decide
that something is "Artificially Intelligent"; this seems to be the
basis for test such as "the Turing Test."  I am only arguing for a
*common* definition among AI researchers, so at least they can agree
(at some fairly basic level) what it is they are doing.

>Let's face it, 13th century philosophers didn't have a definition of "mass",
>"potential field", "tensor", or even "hadron" when they started out trying
>to make sense of motion.  They used the ordinary language they had.  The
>definitions came _last_.

     However, I would assume that 13th century philosophers had not
named their field yet.  AI researchers, on the other hand, have
adopted the name "Artificial Intelligence" without having a common
definition.  We should not be using the label if we cannot define it.
Maybe the solution, then, is to throw the name away, and use a
different label (something less pretentious than "Artificial
Intelligence").  However, since many people in the field seem to be
aiming towards true "machine intelligence," we may as well define the
term and use it (IMHO).

>There are at least two approaches to AI, which may be caricatured as
>(1) "Let's build a god"
>(2) "Let's build amplifiers for the mind"
>I belong to the second camp:  I don't give a Continental whether we end
>up with "machine intelligences" or not, just so long as we end up with
>cognitive tools which are far more intelligible to humans than what we
>have now.

     There is at least one more approach as well:

(3) "Let's build a machine that can reason about its surroundings,
     and is aware of itself and its relation to the surrounding
     environment"

This machine does not have to be a God, and it does not need to
amplify our minds.  Instead, it can have "intelligence" that is unique
to itself, and may be able to perform types of reasoning which are
completely foreign to human beings (and it may not! ;-).  I see the
relationship of humans to this sort of machine as being the same sort
as the relationship between humans and other animals: different ways
of thinking (probably) and different ways of looking at the world.  Of
course there would be some differences.  I am of this third group.

>We are still close enough to the beginnings of AI (whatever that is) that
>both camps can pursue their goals by similar means, and have useful things
>to say to each other, but don't confuse them!

     Good advice.  Each area of study has its benefits and drawbacks;
I think that each area can learn from the other, and have held this
view for quite a while.  But this is true of all forms of science and
other areas (such as religion) as well, IMHO.

				.oO Chris Oo.
-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp

		 "I'm not aware of too many things...
		  I know what I know if you know what I mean"
		    -- Edie Brickell & the New Bohemians

fransvo@htsa (Frans van Otten) (11/29/88)

In article <405@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>     There is at least one more approach as well:
>
>(3) "Let's build a machine that can reason about its surroundings,
>     and is aware of itself and its relation to the surrounding
>     environment"

How would you define 'being aware of (itself, its surroundings, etc)' ?

-- 
                         Frans van Otten
                         Algemene Hogeschool Amsterdam
			 Technische en Maritieme Faculteit
                         fransvo@htsa.uucp

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/01/88)

In article <622@htsa.uucp> fransvo@htsa.UUCP (Frans van Otten) writes:

 > In article <405@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:

 > > (3) "Let's build a machine that can reason about its surroundings,
 > >      and is aware of itself and its relation to the surrounding
 > >      environment"

 > How would you define 'being aware of (itself, its surroundings, etc)' ?

An artificial sentient being would have sensor systems (e.g. vision,
audition, olfaction) through which it would acquire data about its
environment.  It would integrate this sensory data into an internal
map or model.  One of the objects in the system's environment is
the sentient being, itself, so it would need to represent itself as 
one part of the world, too.  With this map, it can navigate through the
environment and interact with other objects (including other sentient
beings).
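
As a rough sketch of what this might look like (the class and method
names below are invented purely for illustration; a real system would
be far richer), one could imagine the internal map as something like:

    # Minimal sketch: a world model in which the agent itself appears
    # as one of the sensed objects.  Illustrative names only.
    class WorldModel:
        def __init__(self, self_id):
            self.self_id = self_id                    # the agent is an object in its own map
            self.objects = {self_id: (0.0, 0.0)}      # object id -> last known position

        def integrate(self, readings):
            """Fold raw sensor readings (id, position) into the map."""
            for obj_id, position in readings:
                self.objects[obj_id] = position

        def relation_to(self, obj_id):
            """Where is some object relative to the agent itself?"""
            sx, sy = self.objects[self.self_id]
            ox, oy = self.objects[obj_id]
            return (ox - sx, oy - sy)

    model = WorldModel(self_id="robot-1")
    model.integrate([("rock", (3.0, 4.0)), ("robot-1", (1.0, 1.0))])
    print(model.relation_to("rock"))    # -> (2.0, 3.0)

Navigation and interaction would then be planned against this map,
including the entry the agent keeps for itself.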

--Barry Kort

lishka@uwslh.UUCP (Fish-Guts) (12/08/88)

In article <42361@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>In article <622@htsa.uucp> fransvo@htsa.UUCP (Frans van Otten) writes:
>
> > In article <405@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>
> > > (3) "Let's build a machine that can reason about its surroundings,
> > >      and is aware of itself and its relation to the surrounding
> > >      environment"
>
> > How would you define 'being aware of (itself, its surroundings, etc)' ?
>
>An artificial sentient being would have sensor systems (e.g. vision,
>audition, olfaction) through which it would acquire data about its
>environment.  It would integrate this sensory data into an internal
>map or model.  One of the objects in the system's environment is
>the sentient being, itself, so it would need to represent itself as 
>one part of the world, too.  With this map, it can navigate through the
>environment and interact with other objects (including other sentient
>beings).

[Sorry I didn't respond earlier, but I've been bogged down with school]

     Mr. Kort's explanation is close to what I had in mind.  I think
a machine could be considered "aware" if it was able to realize it was a
distinct object in the environment and that it was different from
other objects (including other machines just like itself).  Now, in
the tradition of "recursive" definitions (as in many dictionaries), I
haven't defined what "realization" is for a machine.  In this case I
think "realization" would be some sort of proof that it could give to
us that showed it was indeed a distinct and unique object (or being,
or sentient, or whatever you want to call it).  This last definition
seems like YATTV (Yet Another Turing Test Variant).  It's amazing how
these explanations degenerate so quickly! ;-)

     I hope that answers the original poster's question.

					.oO Chris Oo.
-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp

		 "I'm not aware of too many things...
		  I know what I know if you know what I mean"
		    -- Edie Brickell & the New Bohemians

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/10/88)

In article <408@uwslh.UUCP> lishka@uwslh.UUCP (Christopher Lishka) writes:

 > I think [an intelligent machine]
 > could be considered "aware" if it was able to realize it was a
 > distinct object in the environment and that it was different from
 > other objects (including other machines just like itself). ... I
 > think "realization" would be some sort of proof that it could give to
 > us that showed it was indeed a distinct and unique object (or being,
 > or sentient, or whatever you want to call it).

I would be convinced if, upon acquiring language skills, the intelligent
machine unexpectedly uttered the assertion, "I am."

--Barry Kort

bph@buengc.BU.EDU (Blair P. Houghton) (12/12/88)

In article <42835@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>
>I would be convinced if, upon acquiring language skills, the intelligent
>machine unexpectedly uttered the assertion, "I am."

It's unfortunately remarkable that there are very few humans who understand
what you mean by this, and far fewer who are capable of proving it to
themselves without once seeing it done.

We've been around for a few million years, and it was only Descartes, a
few centuries ago, who figured out the concise method of verifying
self-existence.

Your test is more a demonstration of exemplary intelligence than just a
proof of intelligence.  It would certainly convince me, too; but I would
be convinced by something less.  Like if somebody took one of Sejnowski's
speech-synthesizers, fed it morally slanted sentences in training, then
fed it a morally slanted sentence it had never seen, with gaps in the
moral content, and it got the right word in the gap.  That, to me, would
seem intelligent.
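
Something like the following toy captures the shape of the test I
mean - a plain frequency table standing in for the network, with
made-up sentences; it is only meant to show the train-then-fill-the-gap
procedure, not Sejnowski's system:

    # Toy stand-in for the gap-filling test: count which word tends to
    # follow each word during training, then guess the missing word in
    # an unseen sentence from the word just before the gap.
    from collections import Counter, defaultdict

    def train(sentences):
        follows = defaultdict(Counter)
        for sentence in sentences:
            words = sentence.split()
            for left, right in zip(words, words[1:]):
                follows[left][right] += 1
        return follows

    def fill_gap(follows, word_before_gap):
        candidates = follows.get(word_before_gap)
        return candidates.most_common(1)[0][0] if candidates else None

    model = train(["stealing is wrong", "lying is wrong", "sharing is good"])
    print(fill_gap(model, "is"))   # -> "wrong", for the unseen "cheating is ___"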

Don't ask me what "intelligence" is.  I've got enough trouble defining
resistors in Magic extractions.

				--Blair
				  "Whaddya mean, 'one node is
				   two nodes'?"

anderson@secd.cs.umd.edu (Gary Anderson) (12/12/88)

>In article <408@uwslh.UUCP> lishka@uwslh.UUCP (Christopher Lishka) writes:
>
> > I think [an intelligent machine]
> > could be considered "aware" if it was able to realize it was a
> > distinct object in the environment and that it was different from
> > other objects (including other machines just like itself). ... I
>

In article <42835@linus.UUCP> bwk@mbunix (Barry Kort) writes:

>I would be convinced if, upon acquiring language skills, the intelligent
>machine unexpectedly uttered the assertion, "I am."
>

INTRODUCTORY JOKE:
How would we know it wasn't a bug in the code, or a virus? :-)


MANY DIFFERENT TYPES OF INTELLIGENCE => NO SINGLE DEFINITION:

I think that there are many different types of intelligence and,
consequently, many different metrics for intelligence in humans and
machines.  I think intelligence is a matter of degree, not an
all-or-nothing concept.  I would tie a definition of intelligence in
any given context to effectiveness in achieving specific goals.
Whether the goals are man-made or machine-generated, I think they
provide a useful point of departure for measuring the intelligence of
a given entity (man or machine) in that context.  Since humans often
have different goals even within the same context, I would not expect
all humans to agree on whether or not a particular entity was behaving
intelligently enough to be called "intelligent" in any particular
context.

Given my perspective, you can see why I don't think one can write down
a general definition of intelligence which would be acceptable to all
reasonable persons.


WHAT IF SELF AWARENESS IS IN YOUR DEFINITION:

But if one's definition of intelligence requires consciousness, self
awareness, and free will, I wonder if such an intelligence, even if
observed in a lab, would still be called artificial.

Suppose my granddaughter's science project pipes up and says
	 "I am".

My first reaction will be,
	"Come on stop pulling the old man's leg".

If my granddaughter  convinces me that she had not explicitly programmed
the computer to behave that way, I will suspect that she unwittingly
included code which led to the behavior, and I can set out to discover 
which directives led to the behavior.

If I find them, my granddaughter has unintentionally programmed the behavior,
and I am again secure, perhaps without any real justification,
that the machine will not now attempt to carry out a nuclear first strike.

If I don't find them, I will put it down to a natural part of the aging 
process, and ask my daughter to find them. She would of course be free to
ask others for help if she needs it.

If no one can find code which led to the behavior, then perhaps we have
found a machine with self awareness, and perhaps free will, but I don't
see how I can call this behavior artificial, any more than I can call
my granddaughter artificial.

I expect my granddaughter to have consciousness, free will, and self awareness
even if I have no real way to verify this.
Yet, even though I am taking no
small part in her creation and development,
I feel awkward taking credit for her free 
will and self awareness.


MY QUESTION:


In what sense can a researcher take credit for (claim as artificial)
any consciousness that he might observe in whatever artifact he
should create?


CONCLUDING JOKE:

If the goal of artificial intelligence is to create entities with 
consciousness, free will, and self awareness, there is an alternative
method which is a lot easier to master than LISP or Prolog.



-- 
              Gary S. Anderson               | Macondo was already a fearful
                                             | whirlwind of dust and rubble ...
              +-+-+-+-+-+-+-+-+-+-+-+-+      | when Aureliano ... began to 
      email:  anderson@secd.cs.umd.edu       | decipher the instant that he
 U.S. Snail:  University of Maryland         | was living ... Before reaching
              Department of Economics        | the final line [of the
              Room 3147c Tydings Hall        | parchments], he  understood
              College Park, MD 20742         | that ... . Everything 
      Voice:  (301)-454-6356                 | written on them was unrepeatable
----------------------------------------------since time immemorial and forever
more because races condemned to one hundred years of solitude did not have a
second opportunity on earth.  (Gabriel Garcia Marquez)

caasnsr@nmtsun.nmt.edu (Clifford Adams) (12/13/88)

In article <42835@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>I would be convinced if, upon acquiring language skills, the intelligent
>machine unexpectedly uttered the assertion, "I am."

	Would you be convinced if a machine wrote the above after
reading comp.ai?  I have never heard a person simply say "I am."
without any challenge or suggestion to do so.  Would it be fair to let
the machine read comp.ai?  Or "attend" a philosophy lecture?  Perhaps
the machine does not believe that the statement is adequate, or has a
different philosophy of existence.

	On another track...

	I define intelligence as "the ability to solve problems."
Finding that a solution is impossible/impractical also counts.  Now
"ability" and "problems" need to be defined.  This simple definition
describes fairly well what many people call intelligence.
Intelligence is always (in my experience) "measured" by
problem-solving abilities.  The problems vary, but solutions are
usually required.  Solutions to trivial problems use trivial
intelligence.  More complex problems require "more" intelligence.
Adding two numbers needs trivial intelligence.  "Intelligent"
activities, or ones which are needed to pass the Turing test, are more
difficult.

	I define consciousness as "the ability to create problems."
The consciousness uses intelligence like a person uses a computer.
Problems are fed into the intelligence for a solution.  The answers
can then be used to find more problems to solve.
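
	A crude sketch of that division of labor (the functions and
the arithmetic "problems" below are invented purely for illustration;
real problems are obviously nothing this simple):

    # Crude sketch: the "intelligence" only solves what it is handed;
    # the "consciousness" loop creates problems and turns each answer
    # into the next problem.  Illustrative only.
    def intelligence(problem):
        a, b = problem
        return a + b              # "solving" here is just evaluating a sum

    def consciousness(seed, steps):
        problem = (seed, seed)    # create an initial problem
        for _ in range(steps):
            answer = intelligence(problem)
            print("solved", problem, "->", answer)
            problem = (answer, seed)   # the answer suggests a new problem

    consciousness(seed=2, steps=3)
    # solved (2, 2) -> 4
    # solved (4, 2) -> 6
    # solved (6, 2) -> 8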

	[Without consciousness we would be like mobile plants.  You
	 live, you die.  No problem.  :-)]

>--Barry Kort
-- 
 Clifford A. Adams    ---    "I understand only inasmuch as I become."
 ForthLisp Project Programmer        (Goal: LISP interpreter in Forth)
 caasnsr@nmt.edu            ...cmcl2!lanl!unm-la!unmvax!nmtsun!caasnsr
 (505) 835-6104 | US Mail: Box 2439 Campus Station / Socorro, NM 87801

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/13/88)

I very much enjoyed reading Gary Anderson's fantasy about his
granddaughter's science project.  Gary posits the scenario which
Christopher Lishka and I suggested about a pre-conscious AI system
unexpectedly asserting, "I am."

In article <14949@mimsy.UUCP> anderson@secd.cs.umd.edu (Gary Anderson) writes:

 > INTRODUCTORY JOKE:
 > How would we know it wasn't a bug in the code, or a virus?:-)

It's not a bug.  It's a feature.  :-)

 > If no one can find code which led to the behavior, then perhaps we have
 > found a machine with self awareness, and perhaps free will, but I don't
 > see how I can call this behavior artificial, any more than I can call
 > my granddaughter artificial.

Aha.  Perhaps the distinction between carbon-based consciousness
and silicon-based consciousness is not in the nature of the
consciousness, and not in the nature of the artificiality.  Perhaps
the distinction is just in the nature of the underlying molecular substrate.

 > I expect my granddaughter to have consciousness, free will,
 > and self awareness  even if I have no real way to verify this.

My only comment would be to change the word "have" to the word "achieve".

 > Yet, even though I am taking no
 > small part in her creation and development,
 > I feel awkward taking credit for her free 
 > will and self awareness.

Perhaps it would be fair to say that you empowered her to achieve
those goals, but the achievement belongs to her.

 > MY QUESTION:
 > 
 > In what sense can a researcher take credit for (claim as artificial)
 > any consciousness which he were able to observe in whatever artifact he
 > should create?

Rather than "take credit", perhaps the researcher would be content
to enjoy the fruits of his labor.  

 > CONCLUDING JOKE:
 > 
 > If the goal of artificial intelligence is to create entities with 
 > consciousness, free will, and self awareness, there is an alternative
 > method which is a lot easier to master than LISP or Prolog.

True.  It's also easier to screw up.  Perhaps some of us would like
to make our mistakes on silicon substrates before we tackle the
creation of the next generation of carbon-based life forms.

--Barry Kort

mmt@client1.dciem.dnd.ca (Martin Taylor) (12/14/88)

>
>I would be convinced if, upon acquiring language skills, the intelligent
>machine unexpectedly uttered the assertion, "I am."
>
>--Barry Kort

Careful with your commitments, there.  In 1956 I was working on a computer
programmed to play checkers (yes, with graphics), which was supposed to
type out its communications such as "Your Move", "You have two minutes
to move", and "If you do not move within one minute, I shall claim the game."
One day, the teletype line became noisy, and in the midst of plenty of
cartoon-type swearing (#$$8N%#...etc), it said "Od's blood I shall claim
the game."  A little earlier, it had said "Swounds".  Would you be
convinced it was acquiring a mediaeval intelligence?
-- 
Martin Taylor (mmt@zorac.arpa ...!uunet!dciem!mmt) (416) 635-2048
If the universe transcends formal methods, it might be interesting.
     (Steven Ryan).

hall@nvuxh.UUCP (Michael R Hall) (12/23/88)

Martin Taylor writes:
>>[Barry Kort writes:]
>>I would be convinced if, upon acquiring language skills, the intelligent
>>machine unexpectedly uttered the assertion, "I am."
>>
>>--Barry Kort
>
[Stuff about 1956 checkers program omitted.]
>One day, the teletype line became noisy, and in the midst of plenty of
>cartoon-type swearing (#$$8N%#...etc), it said "Od's blood I shall claim
>the game."  A little earlier, it had said "Swounds".  Would you be
>convinced it was acquiring a mediaeval intelligence?

In 1979 I built a little speaker for my Commodore PET PC. Once when
I was playing around with creating beeps, the speaker suddenly put forth
a great amount of static and then the words came, "Call me
King's Rook" in perfectly human-like male speech - then silence. I
was a bit shocked, and so apparently was my PC, because it had frozen
up (it was not normal for it to crash, so it was not just
coincidental).  Sadly, these were both the first and the last words
it ever spoke.

On the surface, it seemed like the PC had proclaimed not only "I
am", but had named itself as well.  However, I surmized at the time
that the speaker and/or computer circuits had picked up a nearby CB
broadcast.  (Anyone know if that is possible?) I was only 14 at the
time, and in retrospect I think this event may have started my
interest in AI.

Michael R. Hall (hall%nvuxh.UUCP@bellcore.COM  OR  bellcore!nvuxh!hall)