[comp.ai] Turing Test and Subject Bias

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/30/89)

If a system passes the Turing Test with one subject, but not with
another, should it be considered intelligent?

If 55% of a sample of hundreds say the system is intelligent, is it?

What if the subjects are:

	a) drunk or on drugs (or a') the experimenters are :-))?
	b) mentally subnormal?
	c) polite and don't want to upset the experimenters
	   (especially if a' applies too :-])?

Then how valid is the Turing Test?

Just what sort of Science did young Mr. Turing have in mind when he
decided that subjective opinion could ever be a measure of system
performance?

How do AI types *REALLY* test their systems?
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550) (05/31/89)

In article <3018@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>Just what sort of Science did young Mr. Turing have in mind when he
>decided that subjective opinion could ever be a measure of system
>performance?

Young Mr Turing suggested in his 1950 paper "Computing Machinery and
Intelligence" that it would be 50 years before what we now call "the
Turing Test" and he called "the imitation game" could be applied. I see
no reason to revise that estimate downwards.  He also made it clear that
he considered that everyday notions of "machinery" and "thinking" then
carried too much excess semantic baggage for reasonable discussion of
the question "can machines think" to be profitable, and consequently
suggested the "imitation game" as a Gedanken experiment with which to
clear the philosophical air.

While there have been a few occasions in AI research when people were
asked to play a Turing-like game to assess a program whose specific
purpose was to articulate a model of a certain kind of human
behaviour, such as Colby's "PARRY", or the ranking of MYCIN's diagnostic
responses compared to a panel of experts, it remains true that the
primary purpose of the Turing Test in AI is as a gedanken experiment.
As Turing pointed out in the paper in question:

  "The popular view that scientists proceed inexorably from
  well-established fact to well-established fact, never being
  influenced by any improved conjecture, is quite mistaken.
  Provided it is made clear which are proved facts and which are
  conjectures, no harm can result. Conjectures are of great
  importance since they suggest useful lines of research."

The Turing Test was suggested by its originator as a source of useful
conjecture. It still serves that purpose, and that tradition in AI is
continued by Searle's Chinese Room argument, and Harnad's Total Turing
Test, to name two examples recently ventilated on comp.ai.

>How do AI types *REALLY* test their systems?

This question seems to suggest that AI types _pretend_ to be using
something like the Turing Test, but actually in the privacy of their
labs are up to something quite different. I thought, Gilbert, that you
had once in your career suffered some education in AI? The answer to how
we AI types test our systems is as various as the kinds of system we
build, and the reasons we build them. In many cases, as I am sure you
know, what is interesting about our systems is why they don't work :-)
That is not the kind of thing which is established by testing a system.

-- 
Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK		

grano@kreeta.cs.Helsinki.FI (Juhani Grano) (06/01/89)

	I think that Alan Turing did not want to define what
	'intelligence' really is (that would indeed be hard),
	but rather to provide some kind of basis for empirical
	tests, and forget the unfruitful arguing about what
	defines intelligence. The test, although inaccurate,
	does provide some information about the intelligence
	of the object being tested.
	Furthermore, would you say that a drunk person is not
	intelligent? Or a child? Or someone mentally handicapped?
	The question of what defines intelligence is and will remain
	unsolved.

--------------------
Kari Grano				University of Helsinki, Finland
email me at: grano@cs.helsinki.fi	Department of CS.

bwk@mbunix.mitre.org (Barry W. Kort) (06/01/89)

In article <3018@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) wonders:

 > How do AI types *REALLY* test their systems?

I suppose AI types test their systems much the same way teachers
test their students.  By giving them examinations consisting of
puzzles or problems appropriate to the class of intelligence which
the learning system has presumably acquired.

Neural network classifiers are given samples to classify.
Diagnostic expert systems are given the symptoms of ailments
or fault conditions.  Theorem provers are given candidate
theorems to decide.  The successful intelligent system (be
it made of silicon or made of meat) gets a passing grade and
goes ahead.  The unsuccessful ones go back for remedial
education or flunk out.  (Or maybe they just end up as USENET
junkies.)
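
Kort's exam metaphor can be made concrete.  A toy sketch in Python
(the "student", the exam, and the pass mark are all invented purely
for illustration, not drawn from any real AI system):

```python
# A toy "examination" for a learning system: grade it on a set of
# question/answer pairs and pass or flunk it, as a teacher would.
def grade(system, exam, pass_mark=0.9):
    """exam is a list of (question, expected_answer) pairs."""
    correct = sum(1 for q, a in exam if system(q) == a)
    score = correct / len(exam)
    return "pass" if score >= pass_mark else "remedial"

# A "student" that classifies numbers as even or odd:
student = lambda n: "even" if n % 2 == 0 else "odd"
exam = [(2, "even"), (3, "odd"), (10, "even"), (7, "odd")]
print(grade(student, exam))  # pass
```

The point of the metaphor survives the sketch: the grader never looks
inside the student, only at its answers.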

--Barry Kort

fransvo@maestro.htsa.aha.nl (Frans van Otten) (06/01/89)

Kari Grano writes:

>I think that Alan Turing did not want to define what 'intelligence'
>really is (that would indeed be hard), but rather to provide some
>kind of basis for empirical tests, and forget the unfruitful arguing
>about what defines intelligence.  The test, although inaccurate,
>does provide some information about the intelligence of the object
>being tested.
>
>Furthermore, would you say that a drunk person is not intelligent?
>Or a child?  Or someone mentally handicapped?  The question of what
>defines intelligence is and will remain unsolved.

It seems to me that the use of the word "intelligence" is rather
subjective, and the opposite of "dumbness".  It is not an absolute
property of a system or human or animal, at least not when normally
used.  I think that is why it is so hard to define "intelligence".
And a definition based on subjective perceptions like these is
probably not of much use in the field of ai.

But it might be possible to describe a property which I might call
"absolute intelligence".  This could be described by something like
this:

  1. A set of rules, like "if big angry man coming towards me
     then run away";
  2. A machine to apply these rules to input data, resulting
     in output data and/or actions.
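
A minimal sketch of those two parts in Python (the rules and the
situations are, of course, made up; this is only meant to show the
shape of the rules-plus-machine idea, not a serious implementation):

```python
# Part 1: a set of rules, each a (condition, action) pair.
rules = [
    (lambda s: "big angry man" in s, "run away"),
    (lambda s: "food" in s,          "eat"),
]

# Part 2: a machine that applies the rules to input data,
# firing the first rule whose condition matches.
def apply_rules(situation, rules, default="do nothing"):
    for condition, action in rules:
        if condition(situation):
            return action
    return default

print(apply_rules("big angry man coming towards me", rules))  # run away
```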

Of course, AI languages like Prolog use this kind of data
representation, but they don't seem to be as successful as humans.
On the other hand, we humans (at least I) do sometimes call a
program intelligent (or dumb).  This happens mostly when the algorithm
used is a bit complex and/or the result is surprising.  When
I know exactly how the program works, I usually don't call it
intelligent anymore.  What does this mean?

I appreciate any comments on this random output of my brain.
-- 
Frans van Otten                     |   fransvo@maestro.htsa.aha.nl    or
Algemene Hogeschool Amsterdam       |   fransvo@htsa.uucp              or
Technische en Maritieme Faculteit   |   [[...!]backbone!]htsa!fransvo

b27y@vax5.CIT.CORNELL.EDU (06/02/89)

In article <3018@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>Then how valid is the Turing Test?
>
>Just what sort of Science did young Mr. Turing have in mind when he
>decided that subjective opinion could ever be a measure of system
>performance?
>
>How do AI types *REALLY* test their systems?

The Turing Test is not really used to test AI systems at all.

If The AI Guys today could get 55% of any (sober) group of people to
think the computer was intelligent, they would be dancing around
the room.  We are looking at at least another few decades of research
before we have anything "smart" enough to trick anyone.

The Turing Test should be thought of as a Thought experiment:
	If you have some system that in every way seems and behaves
	in an intelligent manner, is it intelligent, or is it
	just some very complex series of brainless conditional
	responses?

It brings up some fascinating philosophical questions.  My favorite
twist on the question:  "How do you know if the person you are
talking to is really intelligent?"  The writer of this letter on
the Usenet may SEEM like he/she/it has reasonable mental capabilities
(except for maybe spelling), but what if it is (I am) just some fancy,
complex generator?

If you're interested in a better discussion of the Turing Test, along with
a very good counterexample (the Chinese Room), I would read John
Searle's paper "Minds, Brains, and Programs".  I think I saw some
discussion of Searle on the net lately, in comp.ai or somewhere else.


-----Misha------

Michael Gray   			/-------------------------------/
Misha Computing			/ "Save The Humans"		/
526 Stewart Ave 		/				/
Ithaca N.Y. 14850		/	Bumper Sticker		/
607-277-2774			/				/
B27Y@VAX5.CIT.CORNELL.EDU	/-------------------------------/

rwex@ida.org (Richard Wexelblat) (06/02/89)

Friends, go back and read Turing's paper again.  It was a semi-technical
(or perhaps non-technical) speculation on whether machines might ever be
able to think -- and how we might be able to tell if they do so.  Turing
described something he called the Imitation Game in which a man, a
woman, and an arbiter communicate by teletype.  The arbiter cannot see
either of the players, but they select between themselves which shall be
required to tell the truth and which be permitted to lie.  Then both
try through conversation and Q and A to convince the arbiter that they
are of the gender of the truth-teller.  It might well be the case that
over a broad selection of players and arbiters, there ought to be
reliable statistics of the relative success of the truth-teller and
liar.  (At least within a group of similar age, education level, social
class, nationality, etc.)


Now, program a computer to "be intelligent" and give it "experience"
sufficient to play the Imitation Game, replacing gender with human/
machine as the deciding factor.  (I.e. sometimes the human will be the
liar, sometimes the computer.)  Turing POSITED that if the win/loss
statistics for the human-computer game match those of the man-woman game
then the computer might be said to "think." Bias has nothing to do with
the test as it is statistical.  Given a large enough sample of games
played, individual bias can be made insignificant.
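
That statistical reading of the test can be sketched as a simulation
in Python.  The 70% arbiter success rate below is an assumed figure,
chosen purely for illustration; Turing's criterion only requires that
the two games' rates be commensurate, whatever they are:

```python
import random

def arbiter_success_rate(n_rounds, p_correct, rng):
    """Simulate n_rounds of the game; the arbiter identifies the
    truth-teller correctly with probability p_correct."""
    wins = sum(1 for _ in range(n_rounds) if rng.random() < p_correct)
    return wins / n_rounds

rng = random.Random(42)
# Play many rounds of each game.  With a large enough sample,
# individual arbiter bias washes out, as argued above; the machine
# "passes" if the human-computer rate matches the man-woman rate.
man_woman      = arbiter_success_rate(20_000, 0.70, rng)
human_computer = arbiter_success_rate(20_000, 0.70, rng)
print(f"man-woman: {man_woman:.3f}  human-computer: {human_computer:.3f}")
```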

Please note that Turing was not stating that a machine winning the game
would be intelligent; rather he pointed to one whose win/loss
statistics were commensurate to those of a human.

My library is in transit so I can't check the original text of the
paper.  I believe, however, that an equally valid interpretation of the
"Turing Test" is to leave the game roles alone but just change the
players.  That is, now a man or woman plays one side, a computer the
other.  The goal, however, is still gender, not origin.  I like this
formulation better.

Note also that the focus of the paper was methodology, not method.
There's a big difference in that.

-- 
--Dick Wexelblat (rwex@ida.org)
  703  324  5511 (Formerly: rlw@philabs.philps.com)

childers@avsd.UUCP (Richard Childers) (06/02/89)

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) babbles:

>Just what sort of Science did young Mr. Turing have in mind when he
>decided that subjective opinion could ever be a measure of system
>performance?

Gotta start somewhere. He wasn't measuring "system performance", he was
measuring the relationship between human expectations and the reality,
based upon an observed, but until then unspoken, set of rules already in
place in human interactions.

>How do AI types *REALLY* test their systems?

Well, since you know so much, why don't you tell us a good metric for
awareness, hot shit?

Seems to me the only metric for awareness is a boolean test for awareness,
as measured by another point of awareness, with error checking carried
out by consensus reality. I know some people will object to this as a way
of generating quantifiable data, but I have never had any trouble integrating
a series of 'yes'-'no' answers into a more detailed observation, and in fact
it forms a major portion of my pool of problem-solving techniques in life.

I see nothing shameful in such an effort. It may not meet _your_ criteria,
but it meets everyone else's. Consensus reality is hard to argue with.

But if you _like_ beating your head against a wall, well ... I don't think
that's particularly suggestive of intelligence, however -- artificial or
otherwise.

There's nothing intelligent about unanswerable questions if they don't make
any contribution to the discussion at hand.


>Gilbert Cockton, Department of Computing Science,  The University, Glasgow
>	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

-- richard


-- 
 *    "We must hang together, gentlemen ... else, we shall most assuredly     *
 *     hang separately."         Benjamin Franklin, 1776                      *
 *                                                                            *
 *      ..{amdahl|decwrl|octopus|pyramid|ucbvax}!avsd.UUCP!childers@tycho     *

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (06/02/89)

In article <1108@hydra.cs.Helsinki.FI> grano@cs.helsinki.fi writes:
>
>	Furthermore, would you say that a drunk person is not
>	intelligent? Or a child? Or someone mentally handicapped?
>	The question of what defines intelligence is and will remain
>	unsolved.

I think you may have misunderstood my point.
The quality of a Turing test depends on the quality of the observing
subjects.  This is not true in the same way, or to the same extent,
for proper experimental investigations.  The bias here lies with the
experimenter and the sample (both are revealed in replications, or
results).

The issue for the Turing Test is: what is an acceptable sample?

If Turing didn't want to pin down intelligence, he should have used
another word.  I do not accept your version of history.  Sources?

In Turing's day, it was not as unreasonable to think of 'intelligence'
as an out-there in-agents property.

The term has no role today apart from common sense approbation.

-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (06/03/89)

From article <952@maestro.htsa.aha.nl>, by fransvo@maestro.htsa.aha.nl (Frans van Otten):
> .....
> It seems to me that the use of the word "intelligence" is rather
> subjective, and the opposite of "dumbness".  It is not an absolute
> property of a system or human or animal, at least not when normally
> used.  I think that is why it is so hard to define "intelligence".
> And a definition based on subjective perceptions like these is
> probably not of much use in the field of ai.
> 
> But it might be possible to describe a property which I might call
> "absolute intelligence".  This could be described by something like
> ........

I wish I knew more about epistemology.  I agree that the use of the
word "intelligence" is subjective.  On the one hand, definitions are
subjective: what looks like a good definition to one person looks like
a bad definition to another.  On the other hand, the Turing test is
objectively definable but is based on the subjective judgments of a
participant.  The subjectivity seems unavoidable.

I see basically two approaches.  One is to put the subjective part
first, and then if we can only agree on it (ha, ha) then we can be
objective thereafter.  The "ha, ha" is, if I may say so, the joker.
We argue and argue over definitions.

The other approach is to put the objective part first.  That's what
Turing tried to do, in a sense.  He suggested putting people and
machines in an objective setting, and letting the people do what people
do, namely, make subjective judgments.  That way, we, the observers, at
least can objectively watch other people being subjective, instead of
being lost instantly in our own subjectivity.

Turing's argument is based (I imagine) on the one thing (I imagine) we
can agree on: that intelligence is what people have.  All our
definitions are based on the belief that people are intelligent.  All
our questions are based on wanting to know whether machines can do what
people do.  We just don't all know exactly what it is that people do.

I think (ha, ha?) we can agree on that.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (06/06/89)

From article <3039@crete.cs.glasgow.ac.uk>, by gilbert@cs.glasgow.ac.uk (Gilbert Cockton):
> .....
> In Turing's day, it was not as unreasonable to think of 'intelligence'
> as an out-there in-agents property.
> 
> The term has no role today apart from common sense approbation.

I don't want to sound stupid, but it seems to me that a statement like
that takes all the meaning out of the term "artificial intelligence."
What is artificial intelligence if intelligence is meaningless?

If I take the statement literally, it means that if I like a machine, I
can say it embodies artificial intelligence, and nobody (or anybody)
can contradict me, because my statement is totally subjective.

I really have trouble absorbing that.  I sometimes find value in
Gilbert Cockton's unconventional views, but at other times I find his
excesses excessive.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

grano@kreeta.cs.Helsinki.FI (Juhani Grano) (06/07/89)

In article <952@maestro.htsa.aha.nl> fransvo@htsa.UUCP (Frans van Otten) writes:
:But it might be possible to describe a property which I might call
:"absolute intelligence".  This could be described by something like
:this:
:
:  1. A set of rules, like "if big angry man coming towards me
:     then run away";
:  2. A machine to apply these rules to input data, resulting
:     in output data and/or actions.
:
:Of course, AI languages like Prolog use this kind of data
:representation, but they don't seem to be as successful as humans.

The problem with these if .... then .... constructions is obviously that
they fail to achieve the necessary adaptability that humans have. When
something unexpected happens, a human *magically* knows which context or
'frame' to adopt. This is of course related to experience and
knowledge about reality, but humans also seem to have the ability to
draw conclusions *very* heuristically...to err is human... It is not
clear to me whether the AI people are trying to make an intelligent machine or
a machine whose behaviour resembles that of human beings..:-)

The goal seems to run away every time we try to grasp it...I think it
only goes to show that trying to break 'intelligence' into discrete
areas is not very fruitful. Intelligence is more than the sum of the 
features/abilities that are parts of it.

------------------------------
Kari Grano				University of Helsinki, Finland
email to: grano@cs.helsinki.fi		Department of CS

grano@kreeta.cs.Helsinki.FI (Juhani Grano) (06/07/89)

In article <3039@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
:The quality of a Turing test depends on the quality of the observing
:subjects.  This is not true in the same way, or to the same extent,
:for proper experimental investigations.  The bias here lies with the
:experimenter and the sample (both are revealed in replications, or
:results).

I agree.

:The issue for the Turing Test is: what is an acceptable sample?

I think anyone can see that, e.g., a statistically satisfactory sample of
testees yields a null result; that is, something so general that it says
nothing.  Turing must have realized that!  (No sources, sorry, I just
think so :-))

:If Turing didn't want to pin down intelligence, he should have used
:another word.  I do not accept your version of history.  Sources?

I haven't read the original paper, but according to a book of mine
(sorry again, it's Finnish..) "Turing wrote that the idea was to make
questions about the intelligence of machines uninteresting, NOT to respond
to them."  That also sounds reasonable to me - Turing wasn't an idiot.

:In Turing's day, it was not as unreasonable to think of 'intelligence'
:as an out-there in-agents property.

If I understood that...where is your sense of history? Take a look at
any book on the history of philosophy and see what's been said about
intelligence. Forty years is not that much...

------------------------------
Kari Grano				University of Helsinki, Finland
email to: grano@cs.helsinki.fi		Department of CS

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (06/07/89)

In article <1174@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
>What is artificial intelligence if intelligence is meaningless?
Exactly - can't we just get back together and have everyone just work
on computer systems, and put away this silly distinction of AI versus
non-AI systems?  Where does it get us?

>I really have trouble absorbing that.  I sometimes find value in
>Gilbert Cockton's unconventional views, but at other times I find his
>excesses excessive.

It is the idea that intelligence is a definable, measurable property
which is a perversion.

I am unconventional here, but not in much larger academic subcultures than
the minuscule AI community.  I suggest you look at the intelligence
debate in psychometrics, and Herb Simon's "Sciences of the Artificial"
- as someone in touch with psychologists, he has better sense than to
want to use such a term as AI.

If you are *SERIOUSLY* interested in this question, there is an
enormous amount of work in psychometrics and educational psychology on
this.

Remember, IQ tests were originally devised to keep idiots out of the
French infantry.  Today they only confirm that armies have to take them
anyway:-)

-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550) (06/07/89)

In article <3039@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>In Turing's day, it was not as unreasonable to think of 'intelligence'
>as an out-there in-agents property.
>
>The term has no role today apart from common sense approbation.
>

Would you say the same sort of thing about 'consciousness'? Knowledge?
Belief? 

-- 
Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK		

cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550) (06/07/89)

In article <3039@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>The quality of a Turing test depends on the quality of the observing
>subjects.  This is not true in the same way, or to the same extent,
>for proper experimental investigations. 

This is an excellent point. Why, only last week, when one of my research
assistants came up to me saying "I've been playing with the latest
version of the program for a week now, and I still can't tell the
difference between it and Harry!" I had to explain to her - yet again! -
the proper design of psychological experiments.

Come on! What are we talking about? Turing suggested a gedanken
experiment he doesn't think would in practice be possible for 50 years,
there's no good reason to contract that estimate, and Gilbert is
criticising the design of the experiment as though it were commonplace
practice??
-- 
Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK		

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (06/08/89)

In article <407@edai.ed.ac.uk> cam@edai (Chris Malcolm) writes:
>In article I write
>>
>>The term has no role today apart from common sense approbation.
>
>Would you say the same sort of thing about 'consciousness'? Knowledge?
>Belief? 

Of course not.  These are still very productive terms (e.g. race
relations, Drucker's theories on knowledge workers, the Rushdie debate
over respect for beliefs).

Intelligence is dead outside of AI.  Look at the psychometrics
literature of the 1970s.

Is anyone in psychology still studying intelligence and its measurement?
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (06/08/89)

In article <408@edai.ed.ac.uk> cam@edai (Chris Malcolm) writes:
>Come on! What are we talking about? Turing suggested a gedanken
>experiment he doesn't think would in practice be possible for 50 years,

So it isn't a test then?

>there's no good reason to contract that estimate, and Gilbert is
>criticising the design of the experiment as though it were commonplace
>practice??

So what is the common practice?
Again, how *DO* AI types test their systems?
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

Gordon@ucl-cs.UUCP (06/08/89)

In article <3039@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>The quality of a Turing test depends on the quality of the observing
>subjects.  This is not true in the same way, or to the same extent,
>for proper experimental investigations. 

[followed by comment from Chris Malcolm.]

My two cents worth is
``Inteligence is the the mind of the beholder.''

Gordon.

cww@cadre.dsl.PITTSBURGH.EDU (Charles William Webster) (06/09/89)

In article <3075@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>It is the idea that intelligence is a definable, measurable property
>which is a perversion.
>
>I am unconventional here, but not in much larger academic subcultures than
>than miniscule AI community.  I suggest you look at the intelligence
>debate in psychometrics, and Herb Simon's "Sciences of the Artificial"
>- as someone in touch with psychologists, he has better sense than to
>want to use such a term as AI.

My, my, my.  Aren't we superior!  You'll fool more of the people more
of the time if you actually read the sources you superciliously cite
(or at least represent them undistortedly).

In "Sciences of the Artificial" Simon says:

"At any rate, "artificial intelligence" seems here to stay, and it may prove
easier to cleanse the phrase than to dispense with it.  In time it will become
sufficiently idiomatic that it will no longer be the target of cheap
rhetoric."

Simon may not pepper his writing with the phrase "artificial intelligence"
but many of his students and collaborators are artificial intelligence
researchers, and he isn't nearly as mealy-mouthed about them as you are.

>
>-- 
>Gilbert Cockton, Department of Computing Science,  The University, Glasgow
>	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

Chuck

cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550) (06/09/89)

In article <1119@hydra.cs.Helsinki.FI> grano@cs.helsinki.fi writes:
> It is not
>clear to me whether the AI people are trying to make an intelligent machine or
>a machine whose behaviour resembles that of human beings..:-)
>

Both extremes are research goals being pursued in AI. There are also some
humble(?)  AI people who are trying to make machines whose behaviour
resembles that of insects. 
-- 
Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK		

andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head ) (06/10/89)

In article <281@ucl-cs.UUCP>, Gordon@ucl-cs.UUCP writes:
> My two cents worth is
> ``Inteligence is the the mind of the beholder.''

See? Machines can't make errors of this type [sic].
You are most definitely human!
-- 
...................................................................
Andrew Palfreyman	I should have been a pair of ragged claws
nsc!logic!andrew	Scuttling across the floors of silent seas
...................................................................

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (06/10/89)

In article <3079@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>Again, how *DO* AI types test their systems?

I suppose it depends on the system.  I never heard of anyone using the
Turing test, except perhaps with ELIZA in a modified way.

INTERNIST was tested by giving it published tough cases that were also
given to human experts.  The program's results were compared to the those
of the humans.  Also, it was tested one-on-one against chairmen of departments
of medicine when Dr. Myers was visiting their institutions for grand rounds.

cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550) (06/13/89)

In article <3079@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>So what is the common practice?
>Again, how *DO* AI types test their systems?

As I've said before, in ways as various as the intended capabilities of
the systems. For example, (you're not going to like this!) I'm
developing a system which can plan how to assemble shapes out of parts.
How do I test it? I tell it the shapes of the parts, the shape to build,
and then watch the robot build the shape (or fail, as the case may be).
The criterion is simple and indubitable. I cannot imagine there ever
being any dispute about whether or not the robot succeeded (except
trivial borderline pedantries).

By developing I mean that I'm trying to extend the capabilities of the
system. It is not a complicated system; there are probably thousands of
ways in which it could be built. What is interesting is that some ways
are very simple, whereas others are very complex. What is even more
interesting is why, i.e., the interesting research questions concern
good (simple, economical) architectures for building systems capable of
successful thought and action in a world.

-- 
Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK		

bwk@mbunix.mitre.org (Barry W. Kort) (06/14/89)

In article <3075@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:

 > It is the idea that intelligence is a definable, measurable property
 > which is a perversion.

Perhaps I am a bit perverted, but, when *I* use the word, "intelligence",
I mean "the ability to think and solve problems".

I define a "problem" as "an undesired state of affairs for which
an appropriate idea has not yet been generated or agreed upon."

I define "idea" as "a possibility for changing the state of affairs."

I define "thinking" as "a rational form of information processing
which reduces the entropy or uncertainty of a knowledge base, generates
solutions to outstanding problems, and conceives goal-oriented courses
of action."

--Barry Kort

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (06/19/89)

In article <56041@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>Perhaps I am a bit perverted, but, when *I* use the word, "intelligence",
>I mean "the ability to think and solve problems".
Wow, that's tight!  What if I can only solve some of your problems?
What if I'm brilliant at some, and moderate at others?

We can talk of intelligent behaviour (nearly always post-hoc), but
never general intelligence - this was backed up by the psychometric
work too, although that 'g' factor didn't always factor out.  'g' could
be perceptual speed, confidence, insight, what have you, but it in no
way guarantees success at an arbitrary problem.

I'd love to staff a MacDonald's for a day completely with MENSA types
to see how their IQ scores prepared them for all the problems of
fast-food service :-)

>I define a "problem" as "an undesired state of affairs for which
>an appropriate idea has not yet been generated or agreed upon."
Subjective, moral?  Will AI solve all the world's problems?

>I define "idea" as "a possibility for changing the state of affairs."
Thus many ideas aren't.

>I define "thinking" as "a rational form of information processing
>which reduces the entropy or uncertainty of a knowledge base, generates
>solutions to outstanding problems, and conceives goal-oriented courses
>of action."
And thus much brain life isn't thinking.

What else do you want to proscribe?
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

smoliar@vaxa.isi.edu (Stephen Smoliar) (06/19/89)

In article <56041@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>In article <3075@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
>(Gilbert Cockton) writes:
>
> > It is the idea that intelligence is a definable, measurable property
> > which is a perversion.
>
>Perhaps I am a bit perverted, but, when *I* use the word, "intelligence",
>I mean "the ability to think and solve problems".
>
>I define a "problem" as "an undesired state of affairs for which
>an appropriate idea has not yet been generated or agreed upon."
>
>I define "idea" as "a possibility for changing the state of affairs."
>
>I define "thinking" as "a rational form of information processing
>which reduces the entropy or uncertainty of a knowledge base, generates
>solutions to outstanding problems, and conceives goal-oriented courses
>of action."
>
>--Barry Kort


Let us leave aside issues of perversion (and perversity) and go back to
Gilbert's original remark.  Note that, whether or not we accept "I define"
as constituting a valid definition, Barry has left untouched the word
"measurable," which I, for one, find to be a critical part of Gilbert's
observation.  Barry, do you wish to comment on how you would work measurement
into your criteria?

Actually, Gilbert's remark is very much in sympathy with what Minsky says about
intelligence in THE SOCIETY OF MIND:

	A term frequently used to express the myth that some single
	entity or element is responsible for the quality of a person's
	ability to reason.

(Besides, I prefer "myth" to "perversion.")  After all, if you can isolate it
as a single entity, then you have a leg up on being able to define or measure
it.  (You still may not succeed, of course.  We still don't do a very good job
when it comes to defining "chair.")

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"For every human problem, there is a neat, plain solution--and it is always
wrong."--H. L. Mencken

jwi@lzfme.att.com (Jim Winer @ AT&T, Middletown, NJ) (06/28/89)

Barry Kort writes:

> >Perhaps I am a bit perverted, but, when *I* use the word, "intelligence",
> >I mean "the ability to think and solve problems".

Gilbert Cockton comments:

> Wow, that's tight!  What if I can only solve some of your problems?
> What if I'm brilliant at some, and moderate at others?
...
> I'd love to staff a MacDonald's for a day completely with MENSA types
> to see how their IQ scores prepared them for all the problems of
> fast-food service :-)

Jim Winer adds:

It's *not fair* to *actually listen* to what somebody is saying and then
comment on it. It's also *not nice* to pick on MENSA types and others
with congenital defects.
.
But it's fun.

Jim Winer ..!lzfme!jwi 

Those persons who advocate censorship offend my religion.
        Pax Probiscus!
        Unable to reply to email successfully.
        The opinions expressed here are not necessarily  

bwk@mbunix.mitre.org (Barry W. Kort) (06/29/89)

In article <8683@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP
(Stephen Smoliar) writes:

 > In article <56041@linus.UUCP>  bwk@mbunix (Barry Kort) writes:

 > > In article <3075@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
 > > (Gilbert Cockton) writes:
 
 > > > It is the idea that intelligence is a definable, measurable
 > > > property  which is a perversion.
 
 > > Perhaps I am a bit perverted, but, when *I* use the word,
 > > "intelligence",I mean "the ability to think and solve problems".
 
 > > I define a "problem" as "an undesired state of affairs for which
 > > an appropriate idea has not yet been generated or agreed upon."
 
 > > I define "idea" as "a possibility for changing the state of affairs."
 
 > > I define "thinking" as "a rational form of information processing
 > > which reduces the entropy or uncertainty of a knowledge base, generates
 > > solutions to outstanding problems, and conceives goal-oriented courses
 > > of action."
 
 > Let us leave aside issues of perversion (and perversity) and go back to
 > Gilbert's original remark.  Note that, whether or not we accept "I define"
 > as constituting a valid definition, Barry has left untouched the word
 > "measurable," which I, for one, find to be a critical part of Gilbert's
 > observation.  Barry, do you wish to comment on how you would work
 > measurement into your criteria?
  
Fair question.  And one to which I have not given much thought. 
Intelligence appears to be a multi-dimensional attribute.  Some
psychologists have identified as many as seven distinct kinds of
intelligence.  So if intelligence is measurable, the measure would
probably have to be given as a vector.  (We already know that
conventional tests of scholastic achievement distinguish verbal
intelligence from mathematical intelligence.) 

But if there is a measure of intelligence, it would have to be based
on ability to consistently solve problems of varying categories and
levels of difficulty and complexity.  In this sense, intelligence is
really measured in terms of achievement, relative to the population.
Some kinds of intelligence, such as artistic creativity, or social
skills are difficult to measure with any degree of precision.

 > Actually, Gilbert's remark is very much in sympathy with what Minsky
 > says about intelligence in THE SOCIETY OF MIND:
 
 > 	A term frequently used to express the myth that some single
 > 	entity or element is responsible for the quality of a person's
 > 	ability to reason.
 
I agree that intelligence is not a single entity, but as Minsky suggests,
an agglomeration of interworking subsystems.

 > (Besides, I prefer "myth" to "perversion.")  After all, if you can
 > isolate it as a single entity, then you have a leg up on being able
 > to define or measure it.  (You still may not succeed, of course. 
 > We still don't do a very good job when it comes to defining "chair.")
  
Like "Chair", Intelligence is the name of a fuzzy set.  Candidate
systems have varying degrees of membership in the category of
Intelligent Systems.
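The fuzzy-set framing can be made concrete in the spirit of Zadeh's fuzzy sets; the candidate systems and their membership degrees below are invented examples:

```python
# A minimal sketch of graded membership in the fuzzy set "Intelligent
# Systems".  All membership values here are invented illustrations.

membership = {
    "thermostat": 0.05,
    "chess program": 0.4,
    "average human": 0.9,
}

def is_intelligent(system, threshold=0.5):
    """A crisp yes/no answer requires an arbitrary threshold --
    which is exactly the move the fuzzy view declines to justify."""
    return membership.get(system, 0.0) >= threshold

# Shifting the threshold changes the verdict, not the system:
assert is_intelligent("average human", threshold=0.5)
assert not is_intelligent("chess program", threshold=0.5)
assert is_intelligent("chess program", threshold=0.3)
```

The same holds for "chair": membership is graded, and any sharp cutoff is a convention rather than a discovery.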

 > "For every human problem, there is a neat, plain solution--and it
 > is always wrong."--H. L. Mencken
 
Wasn't it Lao Tse (or maybe Chuang Tse) who said, "Think about
right and wrong, and one immediately falls into error."?
 
--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (06/29/89)

In article <3118@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:

 > In article <56041@linus.UUCP> bwk@mbunix (Barry Kort) writes:

 > > Perhaps I am a bit perverted, but, when *I* use the word,
 > > "intelligence", I mean "the ability to think and solve problems".

 > Wow, that's tight!  What if I can only solve some of your problems?
 > What if I'm brilliant at some, and moderate at others?

Then you have a mixed score on the intelligence vector.

Incidentally, Intelligence includes the ability to learn, discover,
and create.  One can learn a specific method appropriate to an
unfamiliar class of problems, or one can devise a novel method
to solve a new class of problems.

 > We can talk of intelligent behaviour (nearly always post-hoc), but
 > never general intelligence - this was backed up by the psychometric
 > work too, although that 'G' factor didn't always factor out.  'G' could
 > be perceptual speed, confidence, insight, what have you, but it in no
 > way guarantees success at an arbitrary problem.
  
The best method I know of for solving the arbitrary problem is the
Socratic Method.  At the very least, it leads one to the boundary between
one's knowledge and ignorance.

 > I'd love to staff a MacDonald's for a day completely with MENSA types
 > to see how their IQ scores prepared them for all the problems of
 > fast-food service :-)

I wonder if they would do better than Hamburger Helpers staffing
our universities and think tanks.

 > > I define a "problem" as "an undesired state of affairs for which
 > > an appropriate idea has not yet been generated or agreed upon."

 > Subjective, moral?  Will AI solve all the world's problems?
  
I imagine there will be some teamwork between silicon-based systems
and carbon-based systems.

 > > I define "idea" as "a possibility for changing the state of affairs."

 > Thus many ideas aren't.
  
In the Calculus of Ideas, the goal is to generate and select the ideas
which, when applied to the Real World, successfully transform us from
the Present State to the Goal State.  If our World Models are sufficiently
accurate, we stand a chance of achieving this level of competence.  
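This Present-State-to-Goal-State framing reads like classical state-space search; a minimal sketch (with invented states and operators, not Barry's own formalism):

```python
from collections import deque

# A hedged sketch of the "Calculus of Ideas": ideas are operators that
# transform a world state; the search looks for a sequence of ideas
# leading from the Present State to the Goal State.

def plan(present, goal, ideas):
    """Breadth-first search over states reachable by applying ideas."""
    frontier = deque([(present, [])])
    seen = {present}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, apply_idea in ideas:
            nxt = apply_idea(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None  # the World Model offers no route to the goal

# Invented toy operators over an integer "world state":
ideas = [
    ("increment", lambda s: s + 1),
    ("double",    lambda s: s * 2),
]

route = plan(1, 6, ideas)  # e.g. increment, increment, double
```

The caveat about accurate World Models shows up directly: if the operators misdescribe what an idea really does to the world, the plan found is worthless.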

 > > I define "thinking" as "a rational form of information processing
 > > which reduces the entropy or uncertainty of a knowledge base, generates
 > > solutions to outstanding problems, and conceives goal-oriented courses
 > > of action."

 > And thus much brain life isn't thinking.
  
Correct.  By antonymy, I define "worrying" as an emotional form of
information processing which fails to reduce the entropy or uncertainty
of a knowledge base, fails to generate solutions to outstanding problems,
or fails to conceive goal-oriented courses of action.

 > What else do you want to proscribe?

Domestic violence, state terrorism, and child abuse.

--Barry Kort

dmocsny@uceng.UC.EDU (daniel mocsny) (07/07/89)

In article <58052@linus.UUCP>, bwk@mbunix.mitre.org (Barry W. Kort) writes:
> In article <3118@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
> (Gilbert Cockton) writes:
>  > Wow, that's tight!  What if I can only solve some of your problems?
>  > What if I'm brilliant at some, and moderate at others?
> 
> Then you have a mixed score on the intelligence vector.

I almost wish we did not have one word "intelligence." Just as the
Greeks had three words for what we call "love," one word can't contain
everything that jams into the concept of what minds do.

Consider the far simpler problem of characterizing the performance of
a computer system. Suppose I take two comparable workstations, one built
around an Intel 80386 and the other containing a Motorola 68030, and ask
the simple question: "Which system is more powerful?" The answer is
another question: "At what task?" Until we know how to unambiguously
characterize our artifacts, we can hardly get a handle on ourselves.

To accurately benchmark a computer system, you need some quantitative
expression that contains terms accounting for the performance of every
subsystem constituting the system. Then you need to be able to express
exactly how a particular task makes demands on those subsystems. I
don't see any obvious way to make this procedure any simpler than just
running the task on the target system and watching the wall clock.
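Dan's wall-clock point can be sketched directly; the two tasks below are trivial stand-ins for workloads that stress different subsystems:

```python
import time

def benchmark(task, *args):
    """Time a task by the wall clock -- the measure Dan suggests we
    must fall back on, since it folds in every subsystem at once."""
    start = time.perf_counter()
    task(*args)
    return time.perf_counter() - start

# Two invented stand-in tasks stressing different "subsystems":
def integer_heavy(n):
    return sum(i * i for i in range(n))

def memory_heavy(n):
    return [0] * n

# "Which system is more powerful?" -- "At what task?"
t1 = benchmark(integer_heavy, 100_000)
t2 = benchmark(memory_heavy, 100_000)
```

Nothing in the two timings lets you predict a third, unrelated workload; that is precisely why a single "power" scalar for a machine (or a mind) is suspect.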

>  > I'd love to staff a MacDonald's for a day completely with MENSA types
>  > to see how their IQ scores prepared them for all the problems of
>  > fast-food service :-)
> 
> I wonder if they would do better than Hamburger Helpers staffing
> our universities and think tanks.

Whenever we have a group of people involved in some competitive
environment with some fairly solid performance metric (e.g., getting
through engineering school, learning to fly combat aircraft, playing a
musical instrument), we find that their scores usually describe
something like a normal distribution. People responsible for managing
large enterprises obviously want to have some way to predict
individual performance in whatever skills they demand. How convenient
if this predictor were to be a scalar as easy to discuss as
"intelligence." However, actual performance is the only valid test, as
no artificial testing procedure can accurately account for all the
factors.

We don't yet have much of an idea how given real-world problems make
demands on our wetware. If we did, we might be able to isolate our
brains' subsystems, attempt to characterize their performances, and
then try to draw some conclusions about how well the parts might work
together. But this seems absurdly beyond what we can meaningfully
discuss just now.

Dan Mocsny
dmocsny@uceng.uc.edu

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (07/09/89)

From article <1415@uceng.UC.EDU>, by dmocsny@uceng.UC.EDU (daniel mocsny):
> In article <58052@linus.UUCP>, bwk@mbunix.mitre.org (Barry W. Kort) writes:
>> In article <3118@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
>> (Gilbert Cockton) writes:
> ....
>>  > I'd love to staff a MacDonald's for a day completely with MENSA types
>>  > to see how their IQ scores prepared them for all the problems of
>>  > fast-food service :-)
>> 
>> I wonder if they would do better than Hamburger Helpers staffing
>> our universities and think tanks.

I know I'm not answering Daniel Mocsny, who (I think) finessed the
MacDonald's question.  I think Gilbert Cockton asked it.  But the
question doesn't have to be finessed.

IQ runs in my family.  My children did well in the best schools and not
so well in others.  The two who best fit that mold also do well in
menial jobs.  They don't do MacDonald's, but they do very well waiting
on tables in restaurants, after hours at college.  One earned money as
a farmhand when she was too young to get any other job.  Of course they
needed training.  Staffing "for a day" doesn't make sense.  But they
trained better than their friends, and stayed on the job longer.

I don't know about Mensa types - I quit Mensa because I didn't need
what that organization offered - but IQ does not by itself disqualify
you from hard work.  It just qualifies you to get a better job.

Unfortunately, it's a necessary qualification, not sufficient.  People
look at machines, etc. and say: "that can't be intelligent."  So
machines, etc. don't get a chance to show what they can do.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

jps@cat.cmu.edu (James Salsman) (07/10/89)

In article <2037@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:

> IQ runs in my family.

Please do not interpolate that idea:  if you do, then
you will be running the risk of

 ENTRY     racism (RAY'siz'uhm) n.
 SYLLABLES ra-cism
 MEANING   1. The notion that one's own ethnic stock is superior.  2.
           Discrimination or prejudice based on racism.

I enjoy working with large collections of on-line texts.

:James
::chgrp
-- 

:James P. Salsman (jps@CAT.CMU.EDU)

dhw@itivax.iti.org (David H. West) (07/10/89)

In article <5453@pt.cs.cmu.edu> jps@cat.cmu.edu (James Salsman) writes:
]In article <2037@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
]
]> IQ runs in my family.
]
]Please do not interpolate that idea:  if you do, then
]you will be running the risk of
]
] ENTRY     racism (RAY'siz'uhm) n.

Marty's statement is potentially refutable, and hence capable of
refinement into a scientific hypothesis.  It is thoroughly
reprehensible to suggest that such a thing should be rejected by
purely political criteria.  We are descending into barbarity quite
fast enough already, thank you.

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (07/11/89)

From article <5453@pt.cs.cmu.edu>, by jps@cat.cmu.edu (James Salsman):
> In article <2037@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
> 
>> IQ runs in my family.
> 
> Please do not interpolate that idea:  if you do, then
> you will be running the risk of
> 
>  .... racism ...  1. The notion that one's own ethnic stock is superior.  2.
>            Discrimination or prejudice based on racism.
> 
> I enjoy working with large collections of on-line texts.
> 
> :James
> ::chgrp
> -- 
> 
> :James P. Salsman (jps@CAT.CMU.EDU)

I don't know what to make of that.  I think it was sent in anger,
because it doesn't make sense.  And it looks like an attempt at a
public insult.  I hope it is not.  Public insult is demeaning to a
professional newsgroup.

I didn't ask for special treatment, nor ask anyone to deny fairness to
anyone else.  I said family, not race.  I made no reference to ethnic
stock.  I expressed no prejudice.  I expressed an observation.  I
admire my father's intelligence, which was not fully reflected in the
job he held because religious prejudice kept him out of a better one. 
I admire my late mother's intelligence.  I enjoy the company of
intelligent, fair-minded people.

Please, somebody help me.  What should I have said instead of what I
did say?  Or is something bugging Mr. Salsman?

Is there perhaps something wrong with the notion that intelligence is a
heritable trait?  Or a skill teachable by parents to children?

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

jps@cat.cmu.edu (James Salsman) (07/12/89)

In article <2061@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
> From article <5453@pt.cs.cmu.edu>, by jps@cat.cmu.edu (James Salsman):
> > In article <2037@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
> > 
> > > IQ runs in my family.
> > 
> > Please do not interpolate that idea:  if you do, then
> > you will be running the risk of
> > 
> > racism ...  1. The notion that one's own ethnic stock is superior.
> 
> I don't know what to make of that.  I think it was sent in anger,
> because it doesn't make sense.  And it looks like an attempt at a
> public insult.  I hope it is not.

Goodness, I was certainly not trying to be offensive in any way,
but the logical extension of one's family is one's race, and
if any person makes that *interpolation* of the concept, they
are racist.  You are not, and I am sorry if I offended you.

I apologize for not making clear that I was trying to provide
an example of a value-system conflict that I was referring to
in my "Value Systems for AI" post a few days back.

I don't argue that IQ runs in families, but it is also
heavily dependent on environmental factors, and everyone
should be given an equal opportunity for education,
regardless of their family or race.

:jps

--

:James P. Salsman (jps@CAT.CMU.EDU)

Gordon@ucl-cs.UUCP (07/22/89)

>> ``Inteligence is the the mind of the beholder.''

What I meant to type was ``Intelligence is in the mind of the beholder.''

Presumably I was understood the first time.... any machines reading this?

Gordon.

Gordon@ucl-cs.UUCP (07/22/89)

> Both extremes are research goals being pursued in AI. There are also some
> humble(?)  AI people who are trying to make machines whose behaviour
> resembles that of insects. 

Agreed. A few years ago we AI people had done rocks and were working
our way up to the bacteria.

Gordon.

"Robust code not insight into intellignce" - Mike Lesk.

krobt@nova.UUCP (Robert Klotz) (08/03/89)

In an article of <22 Jul 89 11:49:11 GMT>, Gordon@ucl-cs.UUCP writes:

 " 
 " .... any machines reading this?
 " 
 yes, many, however i am sure not many are understanding.

...robert--  
------
{att!occrsh|dasys1|killer|uokmax}!mom!krobt | argue for your limitations
  or   --------------                       | and soon you will find that
    krobt%mom@uokmax.ecn.uoknor.edu         | they are yours.