[comp.ai] The Success of AI

tsmith@gryphon.CTS.COM (Tim Smith) (10/16/87)

There is one humbling sense in which the work in AI in the
past 20 or so years will help considerably in the ultimate
understanding of human intelligence. If you look at concepts
of the brain in the recent past, you see that whatever was
the most current technological marvel served as a metaphor
for the brain. In the early 20th century the brain was a
telephone exchange. After WWII, the systems organization
metaphor was often used (the brain was a large corporation,
with a CEO, VPs, directors, etc.).

It wasn't until computers came along that there was a
metaphor for the brain powerful enough to be taken seriously.
Once people started to try to imitate their brains on
computers, some limitations became apparent. Interestingly
enough, the limitations are not so much in the technological
metaphor as in the present concept of the brain, or of the mind
in general.

There is no reason, in principle, that a very powerful
digital computer cannot imitate a mind, *as long as a mind
is some kind of abstract logic machine*.  What AI has
discovered (though it is very unwilling to admit it) is that
this Cartesian (or even Platonic) concept of the mind is
hopelessly inadequate as a basis for understanding human
intelligence!

To conceive of the human mind as a disembodied logic machine
seemed like a great breakthrough to scientists and
philosophers. If it was this, it could be studied and
understood. If it wasn't this, then any scientific study of
the mind (hence, of intelligence) appeared to be fruitless.

The success rate in AI research (as well as most of cognitive
science) in the past 20 years is not very encouraging.
Predictions, based on very optimistic views of the problem
domain, have not been met. A few successful spin-offs have
occurred (expert systems, better programming tools and
environments), but in general the history is one of failure.
Computers do not process natural language very well; they cannot
translate between languages with acceptable accuracy; they
cannot prove significant, original mathematical theorems.

What AI researchers and other cognitive scientists now have to
face is fairly clear evidence that simulations of human
intelligence, where human intelligence is modelled as a
disembodied logic machine, are doomed to fail. Better hardware
is not the solution.  Connection machines or simple silicon
neural nets are not the answer. A better concept of "mind" is
what is needed now. This is not to say that AI research should
halt, or that computers are not useful in studying human
intelligence. (They are indispensable.) What I think it does
mean is that one or more really original theoretical paradigms
will have to be developed to begin to address the problems.

One possible source of a new way of thinking about the problems
of modelling human intelligence might be found in a revolution
that is beginning in the cognitive sciences. This revolution is
of course not accepted by most cognitive scientists; many are
not even aware of it. It is difficult to characterize the
revolution, but it essentially rejects the Cartesian dualism of
mind and body, and recognizes that an adequate description of
human intelligence must take into account aspects of human
physiology, experience, and belief that cannot *now* be modelled
by simple logic (e.g., programs).  For one example of this new
way of thinking, see the recent book by the linguist George
Lakoff, entitled "Women, Fire, and Dangerous Things."  (Neither
the book nor the title are frivolous.)

I believe the great success of AI has been in showing that
the old dualistic separation of mind and body is totally
inadequate to serve as a basis for an understanding of human
intelligence.
-- 
Tim Smith
INTERNET:     tsmith@gryphon.CTS.COM
UUCP:         {hplabs!hp-sdd, sdcsvax, ihnp4, ....}!crash!gryphon!tsmith
UUCP:         {philabs, trwrb}!cadovax!gryphon!tsmith

eric@snark.UUCP (Eric S. Raymond) (10/18/87)

In article <1922@gryphon.CTS.COM>, tsmith@gryphon.CTS.COM (Tim Smith) writes:
> Computers do not process natural language very well; they cannot
> translate between languages with acceptable accuracy; they
> cannot prove significant, original mathematical theorems.

I am in strong agreement with nearly everything else you say in this article,
especially your emphasis on a need for a new paradigm of mind. But you are,
I think, a little too dismissive of some real accomplishments of AI in at
least one of these difficult areas.

Doug Lenat's Automated Mathematician program was a theorem prover equipped with
a bunch of heuristics about what is 'mathematically interesting', essentially
methods for grinding out interesting generalizations and combinations of known
theorems. Lenat fed it the Zermelo-Fraenkel set theory axioms and let it run.

After n hours of chugging through a lot of nontrivial but already-known
mathematics, it 'conjectured' and then proved a bunch of new results on the
number-theoretic properties of Pythagorean triples (3-tuples of integers of 
the form <x, y, sqrt(x**2 + y**2)>).
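
(For those who haven't seen the paper: AM's control structure was an
agenda of tasks ordered by a numeric 'interestingness' rating. Below is
a toy sketch of that control loop -- mine, from memory, in modern Python
rather than the original LISP; the scoring and expansion rules are
invented stand-ins, not Lenat's heuristics.)

    import heapq

    def discovery_loop(seeds, score, expand, budget=15):
        # Toy AM-style agenda: always work on the most 'interesting' task.
        agenda = [(-score(c), i, c) for i, c in enumerate(seeds)]
        heapq.heapify(agenda)              # min-heap, so scores are negated
        seen, found, tick = set(seeds), [], len(seeds)
        while agenda and len(found) < budget:
            _, _, concept = heapq.heappop(agenda)
            found.append(concept)
            for child in expand(concept):  # derive new candidate concepts
                if child not in seen:
                    seen.add(child)
                    tick += 1
                    heapq.heappush(agenda, (-score(child), tick, child))
        return found

    def score(n):                          # divisor count: divisor-rich
        d, i = 0, 1                        # numbers count as 'interesting'
        while i * i <= n:
            if n % i == 0:
                d += 2 if i * i != n else 1
            i += 1
        return d

    expand = lambda n: [m for m in (n + n, n * n) if m < 10**6]
    print(discovery_loop([2, 3, 5], score, expand))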

I was a theoretical mathematician at the time I saw the AM paper. It was
*fascinating*. The program could probably have done a lot more, but it
eventually choked on the size of its own LISP data structures.

So at least one of your negative assertions is incorrect.

I never heard of this line of research being followed up by anyone but
Doug Lenat himself, and I've never been able to figure out why. He later
wrote a program called EURISKO that (among other things) won that year's
Trillion-Credit Squadron tournament (this is a space wargame related to
the _Traveller_ role-playing game) and designed an ingenious fundamental
component for VLSI logic. I think all this was in '82.

> I believe the great success of AI has been in showing that
> the old dualistic separation of mind and body is totally
> inadequate to serve as a basis for an understanding of human
> intelligence.

Correct. But while recognizing this, let's not lose sight of the real
accomplishments of AI in the purely-symbolic domain (whatever happened to
Steve Harnad, anyhow?).

I think AI has the same negative-definition problem that "natural philosophy"
did when experimental science got off the ground -- that once people get a
handle on some "AI" problem (like, say, playing master-level chess or automated
proof of theorems) there's a tendency to say "oh, now we understand that; it's
*just* computation, it's not really AI" and write it out of the field (it would
be interesting to explore the hidden vitalist premises behind such thinking).

So at any given time the referents for AI in peoples' minds are failures and
unproved speculations, and the field goes through these manic-depressive cycles
as it regroups around a new theory, problem or technology, explores it enough
to make it useful for others, and then loses it to the rest of the world.

Case in point: in the 1950s, *compilers* were considered "AI". I'm not old
enough to remember that, but some of you may be. So, don't throw out the
ship with the bath water -- er, that is, don't give up the baby -- er, oh,
*you* know what I mean. AI is a useful category not in spite of all the
ambiguity and confusion and excitement that surrounds it, but *because* of
that.
-- 
      Eric S. Raymond
      UUCP:  {{seismo,ihnp4,rutgers}!cbmvax,sdcrdcf!burdvax,vu-vlsi}!snark!eric
      Post:  22 South Warren Avenue, Malvern, PA 19355    Phone: (215)-296-5718

spe@SPICE.CS.CMU.EDU (Sean Engelson) (10/18/87)

Given a sufficiently powerful computer, I could, in theory, simulate
the human body and brain to any desired degree of accuracy.  This
gedanken-experiment is the one which put the lie to the biological
anti-functionalists, as, if I can simulate the body in a computer, the
computer is a sufficiently powerful model of computation to model the
mind.  I know, for example, that serial computers are inherently as
powerful computationally as parallel computers, though not as
efficient, as I can simulate parallel processing on essentially serial
machines.  So we see that if the assumption that the mind is an
inherent property of the body is accepted, we must also accept that a
computer can have a mind, if only by the inefficient expedient of
simulating a body containing a mind.
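
To make the serial-simulates-parallel step concrete, here is a minimal
sketch (a toy of my own devising, in Python, not a model of any actual
neural process): one synchronous "parallel" update of N cells, executed
serially, made equivalent by double buffering.

    def parallel_step(state, update):
        # Snapshot the old state so every cell's new value is computed
        # from the same instant; the serial loop then reproduces a truly
        # simultaneous update exactly, only more slowly.
        old = list(state)
        return [update(old, i) for i in range(len(old))]

    # Example: each cell relaxes toward the mean of its ring neighbors.
    smooth = lambda s, i: (s[i - 1] + s[i] + s[(i + 1) % len(s)]) / 3
    state = [0.0, 0.0, 9.0, 0.0, 0.0]
    for _ in range(3):
        state = parallel_step(state, smooth)
    print(state)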

	-Sean-
--
Sean Philip Engelson			I have no opinions.  
Carnegie-Mellon University		Therefore my employer is mine.
Computer Science Department	
----------------------------------------------------------------------
ARPA: spe@spice.cs.cmu.edu
UUCP: {harvard | seismo | ucbvax}!spice.cs.cmu.edu!spe

ed298-ak@violet.berkeley.edu (Edouard Lagache) (10/19/87)

	Anyone interested in the question of A.I. success (or lack of it)
	should have a look at Hubert Dreyfus's work.  He has written two
	books which are critical of present A.I. methodologies, and make
	a persuasive argument for why present approaches to A.I. won't
	work.

	The books are:

		What Computers Can't Do: The Limits of Artificial Intelligence
		(Harper & Row, 1979)

		Mind over Machine: The Power of Human Intuition and Expertise
		in the Era of the Computer
		(co-authored with Stuart Dreyfus and Tom Athanasiou, The
		 Free Press, 1986).

	It perhaps goes without saying that Hubert Dreyfus is one of the 
	people most disliked by A.I. researchers.  However, no one in this
	field can really afford not to be aware of Dreyfus's concerns.

						Edouard Lagache
						School of Education
						U.C. Berkeley
						lagache@violet.berkeley.edu

brian@ut-sally.UUCP (Brian H. Powell) (10/19/87)

In article <228@snark.UUCP>, eric@snark.UUCP (Eric S. Raymond) writes:
> Doug Lenat's Automated Mathematician program was a theorem prover equipped with
> a bunch of heuristics about what is 'mathematically interesting',
> [...]
> 
> After n hours of chugging through a lot of nontrivial but already-known
> mathematics, it 'conjectured' and then proved a bunch of new results on the
> [...]

     I feel compelled to challenge this, but not necessarily the rest of your
article.
     AM wasn't a theorem prover.  From the July, 1976 dissertation:

7.2.2 Current Limitations

[...]
AM has no notion of proof, proof techniques, formal validity, heuristics for
finding counterexamples, etc.  Thus it never really establishes any conjecture
formally.
			---end of excerpt---

     The dissertation goes on to briefly suggest ways of adding this
capability, but as I understand it, no one ever has.  Lenat himself, as I
recall, thought it was more interesting to do more work towards heuristics
than proving.  EURISKO was the result of that.  (i.e., you might get more
power if you could spend part of your time conjecturing heuristics in addition
to conjecturing about particular problems.)
     AM is a neat program, but by many views it's overrated.  It's great that
it conjectures all these neat theorems, but my impression is that it does
quite a bit of floundering to find them.  I think it also spends a lot of time
floundering without finding anything useful.  (A program run isn't
guaranteed to think of something clever.)  Finally, it's not clear that the
program is really intelligent enough to realize that it's just conjectured
something intelligent.  (I would bet that there are a lot of things AM has
considered uninteresting that humans consider interesting, and vice-versa.)
     A human can monitor AM and modify the priority of certain tasks if the
human feels AM is studying the wrong thing.  A human is practically required
for this purpose if AM is to do something especially clever.  This turns AM
more into a search tool than an autonomous program, and I don't think a tool
is what Lenat had in mind.
     If you read the summaries of AM, you think it's powerful.  Once you read
the entire dissertation, you realize it's not quite as great a program as you
had thought, but you still think it's good research.

Brian H. Powell
		UUCP:	...!uunet!ut-sally!brian
		ARPA:	brian@sally.UTEXAS.EDU

srp@ethz.UUCP (Scott Presnell) (10/20/87)

In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:

>
>Given a sufficiently powerful computer, I could, in theory, simulate
>the human body and brain to any desired degree of accuracy.  This

Horse shit.  The problem is you don't even know exactly what you are
simulating!  I suppose you could say it's all a problem of definition,
however even with your assumption that the mind is an integral part of the
body I still claim that you don't know what you're simulating.  For
instance, dreams: are they logical? Do they fall into a pattern? A computer
has got to have them to be a real simulation of a body/mind, but you cannot
simulate what you cannot accurately describe.

Let's get down to a specific case:
I propose that given any amount of computing power, you could not presently,
and probably will never be able to, simulate me: Scott R. Presnell.
My wife can be the judge.  

This may sound reactionary, that's because that's the way I responded
internally to this first statement.  I apologize if I've jumped into a
discussion too quickly, I don't have time to read the previous discussions
right now.  

Scott Presnell 						Organic Chemistry
Swiss Federal Institute of Technology  (ETH-Zentrum)
CH-8092 Zurich, Switzerland.
uucp:seismo!mcvax!cernvax!ethz!srp (srp@ethz.uucp); bitnet:Benner@CZHETH5A

eric@snark.UUCP (Eric S. Raymond) (10/20/87)

In article <9320@ut-sally.UUCP>, brian@ut-sally.UUCP (Brian H. Powell) writes:
>      I feel compelled to challenge this, but not necessarily the rest of your
> article.
>      AM wasn't a theorem prover.  From the July, 1976 dissertation:

Thanks for the correction, which I also received by email from another comp.ai
regular. I never saw Lenat's dissertation, just an expository paper in one of the
journals. I guess maybe the reason I thought the sucker had a theorem prover
attached was that I was working on LISP support for a theorem prover at the
time, and my associative memory got a collision in its hash tables :-).

Nevertheless, I think my more general observations about AI's definitional
problem remain valid. Compilers are a 'success' of AI. So are heuristic-based
search-and-backtrack algorithms. So is the visual analysis preprocessing used
in seeing pick-and-place robots. So (most recently) are 'expert systems'.
In *each case*, these problem areas were defined out of the AI field as soon
as they spawned halfway-usable technologies and acquired their own research
communities.

I think the same thing is about to happen to neural nets, BTW...
-- 
      Eric S. Raymond
      UUCP:  {{seismo,ihnp4,rutgers}!cbmvax,sdcrdcf!burdvax,vu-vlsi}!snark!eric
      Post:  22 South Warren Avenue, Malvern, PA 19355    Phone: (215)-296-5718

smoliar@vaxa.isi.edu (Stephen Smoliar) (10/20/87)

Those who would like a taste of the Dreyfus style before embarking upon one of
his books in its entirety would do well to consult the Summer 1986 issue of
IEEE EXPERT.  The article "Why Expert Systems Do Not Exhibit Expertise,"
by Hubert and Stuart Dreyfus, is an excerpt from MIND OVER MACHINE:  THE
POWER OF HUMAN INTUITION AND EXPERTISE IN THE ERA OF THE COMPUTER.  While
there is definitely merit to deflating exaggerated claims about expert systems
which have been made in the name of salesmanship, Hubert Dreyfus approaches
this issue as a philosopher.  Consequently, the technical baggage he carries
is often neither timely nor adequate.  Were he to wage his
campaign on the battleground of the philosophy of mind, he might come away
with some notable victories;  but by descending to the level of technology,
he often falls into traps of misconception.

Here is a sample passage:

	Humans often think by forming images and comparing them
	holistically.  This process is quite different from the
	logical, step-by-step operations that logic machines
	perform.

There are several things wrong here.  First of all, a holistic theory of
memory or reasoning remains a HYPOTHESIS.  Claiming it as an observation
is a gross misrepresentation of the current state of cognitive science.
Second, the term "logic machine" has been introduced to capture a particular
machine architecture which lacks what Dreyfus wants it to lack.  He does
not admit of the possibility of an alternative architecture for the
mechanization of thought which could model the holistic hypothesis.
Fortunately, more productive cognitive scientists HAVE pursued this
line of reasoning.

In any event, the text continues in an attempt to elaborate upon this point:

	For instance, human beings use images to predict how certain
	events will turn out.

This is, again, hypothesis.  It rests on a weaker hypothesis which is never
cited:  that human beings use MODELS to predict how certain events will
turn out.  This is the whole "mental models" approach to cognition, for
which there are both a substantial literature and experiments in mechanical
implementation.

The text continues:

	Programming a computer to analyze a scene has turned out to
	be very difficult.  Such programs require a great deal of
	computation, and they work only in special cases with objects
	whose characteristics the computer has been programmed to
	recognize in advance.

Nevertheless, such programs may work better than people in those special
cases and can be used in factories.  That is why industrial robotics has
become as effective as it has.  I regard this as an instance of the situation
I raised regarding perpetual motion machines in an earlier note.  There I
noted that had Bessler's machine actually been put to work and found
to run for significantly long periods of time without energy input, it
would have been an impressive contribution even if its energy dissipated
very slowly, rather than not at all.  Similarly, we would do better to
study special cases of scene analysis which are successes rather than
belabor the obstacles to a more general approach to the task.

It gets better:

	But that is just the beginning of the problem.  Computers
	currently can make inferences only from lists of facts.
	It's as if to read a newspaper you had to spell out each
	word, find its meaning in the dictionary and diagram every
	sentence.

This strikes me as a gross misrepresentation of mechanical reasoning, and
I think the crux of this misrepresentation is a confusion between reasoning
and representation.  Fortunately, there are other philosophers who appreciate
that these are distinct issues;  but they don't seem to attract as much
attention as Dreyfus.

One last jab in parting:

	However, a computer cannot recognize emotions such as anger
	in facial expressions, because we know of no way to break
	down anger into elementary symbols.  Therefore, logic machines
	cannot see the similarity between two faces that are angry.
	Yet human beings can discern the similarity almost instantly.

This strikes me as another example of sloppy thinking.  Are we talking
about a GEDANKEN experiment here?  If so, how are we to define it?
Are we looking at faces out of context in an attempt to infer emotion?
If so, then I would claim that humans are nowhere near as good as is
claimed.  Indeed, man has been notorious for misreading emotion.  The
lack of this skill has probably precipitated many major historical events.
Seymour Papert used to accuse Dreyfus of committing the "superhuman human"
fallacy by assuming that an artificial intelligence would surpass a human
one.  Here is a situation where Dreyfus has gone out on a limb which he
should have left alone.  Our understanding of how PEOPLE exhibit and
perceive emotion is sufficiently weak that, for the most part, artificial
intelligence seems to have the good sense to leave it in peace.

smoliar@vaxa.isi.edu (Stephen Smoliar) (10/20/87)

In article <9320@ut-sally.UUCP> brian@ut-sally.UUCP (Brian H. Powell) writes:
>     If you read the summaries of AM, you think it's powerful.  Once you read
>the entire dissertation, you realize it's not quite as great a program as you
>had thought, but you still think it's good research.
>
Actually, Lenat and John Seely Brown did something rather like this when they
wrote the paper "Why AM and Eurisko Appear to Work" for AAAI-83.

smythe@iuvax.cs.indiana.edu (10/20/87)

> /* Written  5:09 pm  Oct 17, 1987 by eric@snark in iuvax:comp.ai */
> 
> ...
> 
> Doug Lenat's Automated Mathematician program was a theorem prover equipped with
> a bunch of heuristics about what is 'mathematically interesting', essentially
> methods for grinding out interesting generalizations and combinations of known
> theorems. Lenat fed it the Zermelo-Fraenkel set theory axioms and let it run.
> 
> After n hours of chugging through a lot of nontrivial but already-known
> mathematics, it 'conjectured' and then proved a bunch of new results on the
> number-theoretic properties of Pythagorean triples (3-tuples of integers of 
> the form <x, y, sqrt(x**2 + y**2)>).
> 
> I was a theoretical mathematician at the time I saw the AM paper. It was
> *fascinating*. The program could probably have done a lot more, but it
> eventually choked on the size of its own LISP data structures.
> 
> So at least one of your negative assertions is incorrect.

The reason that AM choked was not so much that it got bogged down in
its data structures, but that its ``discovery heuristics'' kept it
from discovering anything ``interesting'' (by its own measure) after
a while.  It simply started thrashing without making much progress.
EURISKO was an attempt to remedy that by discovering or refining its 
own heuristics.  Read Lenat's paper, ``The Nature of Heuristics'' for
his own explanation.
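
The failure mode is easy to reproduce in miniature.  In the toy run
below (Python, invented numbers, nothing of Lenat's), each task spawns
children whose interestingness is a decayed copy of its own; because the
heuristics never change, the agenda soon holds nothing above threshold
and the loop churns on without discovering anything further.  EURISKO's
remedy, in these terms, was to make the expansion rule itself an object
the program could modify.

    import heapq, random

    def run(decay=0.9, threshold=0.5, steps=200):
        random.seed(1)
        agenda, tick = [(-1.0, 0)], 0       # one seed task, score 1.0
        hits = misses = 0
        for _ in range(steps):
            s, _ = heapq.heappop(agenda)
            s = -s
            if s >= threshold:
                hits += 1                   # a genuine 'discovery'
            else:
                misses += 1                 # thrashing on dull tasks
            for _ in range(2):              # FIXED expansion heuristic:
                tick += 1                   # children are duller copies
                child = s * decay * random.uniform(0.8, 1.0)
                heapq.heappush(agenda, (-child, tick))
        return hits, misses

    print(run())  # a short burst of hits, then a long tail of misses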

Erich Smythe
Indiana University
smythe@iuvax.cs.indiana.edu

tsmith@gryphon.CTS.COM (Tim Smith) (10/21/87)

In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
+=====
| Given a sufficiently powerful computer, I could, in theory, simulate
| the human body and brain to any desired degree of accuracy.  This
| gedanken-experiment is the one which put the lie to the biological
| anti-functionalists, as, if I can simulate the body in a computer, the
| computer is a sufficiently powerful model of computation to model the
| mind.  I know, for example, that serial computers are inherently as
| powerful computationally as parallel computers, though not as
| efficient, as I can simulate parallel processing on essentially serial
| machines.  So we see, that if the assumption that the mind is an
| inherent property of the body is accepted, we must also accept that a
| computer can have a mind, if only by the inefficient expedient of
| simulating a body containing a mind.
|        -Sean-
+=====
My claim is, specifically, that you cannot simulate a human
being (body and mind) with a digital computer, either in theory
or practice.  Not a few people with whom I am in basic agreement
would claim that, well, it just *might* be conceivable in
theory, but you could never do it in practice.

It's not clear what is meant by "in theory" here. It sounds like
an unacceptable hedge. You might, for example, claim that with a
very large number of computers, all just at the edge of the
speed boundaries dictated by the laws of physics in the most
advanced materials imaginable, you could simulate a human body
and mind--but not in real time. But the simulation would have to
be in real time, because humans live in real time, doing things
that are critically time dependent (perceiving speech, for
example).

Similarly, humans think the way they do partially because of
their size, because of the environment they live in, because of
the speed at which they move, live, and think.

One of the consistent failings of AI researchers is to vastly
underestimate the intricacy and complexity of the kinds of
things they are trying to model (of course most of the other
cognitive scientists in this century have also underestimated
these things). Playing chess is nothing compared with natural
language understanding. We take language understanding for
granted, because, after all, we all do it.  Yet we consider a
chess grand master brilliant, because we cannot match his
skills. But in fact, becoming a chess grand master is not more
difficult than learning to speak and write English. It's easier.
We learn language because we start early, spend *lots* and
*lots* of time doing it, and it's fun (watch children playing
word games sometime).  We recognize that it's learn to speak or
perish, in a sense. Many fewer people are motivated (at the
early age required) to learn to play chess at the GM level.

The trouble with the kind of naive (if you'll pardon the
expression) reductionism inherent in your position is that it
seems to assume that any set of physical interactions that can
be expressed mathematically can be scaled up to a full-scale
simulation, and that this simulation would be indistinguishable
from the original thing.

Leaving aside AI for a moment, consider weather simulations.
Meteorologists have developed computerized simulations of
phenomena such as hurricanes.  Based on lots of data from past
storms, they can predict, with some accuracy, how a developing
storm might behave. This is obviously an extremely useful
capability. But to claim that a computer simulation of a
hurricane is exactly the same as the real thing would probably
sound like a very poor joke to someone who has experienced a
hurricane first-hand.

It seems to me that any intelligent person would say "how could
you ever truly simulate a hurricane, and why would you want to?"
Well, I have the same reaction to those who say that they want
to simulate human intelligence, or even some essential part of
it such as natural language understanding. How, and for God's
sake, *why*?  To study human intelligence, using computers and
any other tools available, is a fascinating thing to do. I have
spent a number of years doing so. But to say that we are
approaching an era when human intelligence will be simulated
seems to be just about like saying that from the puff of air
generated by the wave of a hand it is only a few short steps to
a full-scale realistic simulation of a hurricane.

Know what it is you are trying to simulate!


-- 
Tim Smith
INTERNET:     tsmith@gryphon.CTS.COM
UUCP:         {hplabs!hp-sdd, sdcsvax, ihnp4, ....}!crash!gryphon!tsmith
UUCP:         {philabs, trwrb}!cadovax!gryphon!tsmith

tsmith@gryphon.CTS.COM (Tim Smith) (10/21/87)

In article <228@snark.UUCP> eric@snark.UUCP (Eric S. Raymond) writes:
+====
| In article <1922@gryphon.CTS.COM>, tsmith@gryphon.CTS.COM (Tim Smith) writes:
| > Computers do not process natural language very well; they cannot
| > translate between languages with acceptable accuracy; they
| > cannot prove significant, original mathematical theorems.
| 
| I am in strong agreement with nearly everything else you say in this article,
| especially your emphasis on a need for a new paradigm of mind. But you are,
| I think, a little too dismissive of some real accomplishments of AI in at
| least one of these difficult areas.
| 
| Doug Lenat's Automated Mathematician program was a theorem prover equipped with
| a bunch of heuristics about what is 'mathematically interesting', essentially
| methods for grinding out interesting generalizations and combinations of known
| theorems.
| [...]
| 
| So at least one of your negative assertions is incorrect.
+=====
OK, I'll accept your word on this (I'm a linguist, not a
mathematician).

+=====
| I think AI has the same negative-definition problem that "natural
| philosophy" did when experimental science got off the ground -- that
| once people get a handle on some "AI" problem (like, say, playing
| master-level chess or automated proof of theorems) there's a tendency
| to say "oh, now we understand that; it's *just* computation, it's not
| really AI" and write it out of the field (it would be interesting to
| explore the hidden vitalist premises behind such thinking).
+=====
Well, scientific (and philosophical) fields do progress, and there is
a normal tendency to discard the old and no longer interesting. But
there is an interesting aspect to what you are saying, I believe. Let
me try to develop it a bit, using chess as an example.

Chess: I am at a disadvantage here in one sense--I don't play the
game very well. In my limited understanding of it, it is a very
difficult game to play at a high level. It requires years of study,
usually starting at a young age, to become a grand master. It
requires peculiar abilities of concentration and nervous resources to
play chess at a competitive level. Nevertheless, I don't think of
chess as being a particularly intellectual game. It seems much more like
tennis to me (and I don't play that either). This is not a put-down!
I think of chess as being a sedentary sport--a sport for the mind.

Now here's the interesting point. If you were to come to me and say--
"Smith, you have a year to develop an automaton that will play some
kind of major sport at a championship level, competing against humans.
Money is no object, and you can have access to all the world's
experts in AI and robotics, but you must design a robot that plays
championship X in a year's time.  What is X?" I would say, without a
moment's hesitation, "tennis".

Why? Of all the sports, tennis is the most bounded. It is played within
a very restricted area (unlike golf or even baseball), it is a
one-against-one sport (unlike football or soccer), the playing surfaces
(aside from Wimbledon) are the truest of all the major sports, and it
is indubitably the most boring of all the sports to watch (if not to
play). A perfect candidate for automation.

Chess? It is tennis for the mind. And so a perfect candidate for
initial attempts at AI. But if computers have conquered chess (as
they seem about to), does this mean that "real" artificial
intelligence is not far behind? No, it just means that chess was
over-rated as an intellectual exercise! On a scale of 1 to 10, in
terms of intellectual effort involved in playing the game, chess
seems to rate at about .002.  In terms of skill, concentration
ability, depth of understanding of the game, etc. it is difficult.
But then, so is multiplying two 37 digit numbers in your head
difficult. Unless you're an "idiot savant", or a computer!
-- 
Tim Smith
INTERNET:     tsmith@gryphon.CTS.COM
UUCP:         {hplabs!hp-sdd, sdcsvax, ihnp4, ....}!crash!gryphon!tsmith
UUCP:         {philabs, trwrb}!cadovax!gryphon!tsmith

lishka@uwslh.UUCP (Christopher Lishka) (10/22/87)

In article <224@bernina.UUCP> srp@bernina.UUCP (Scott Presnell) writes:
>In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
>
>>
>>Given a sufficiently powerful computer, I could, in theory, simulate
>>the human body and brain to any desired degree of accuracy.  This
>
>Horse shit.  The problem is you don't even know exactly what you are
>simulating!

Good point (although I might not have phrased it so strongly)!  I would
like to see some sort of proof that one could, "in theory", simulate
the human body and brain to any desired degree of accuracy.  Hell, as
a student of A.I. who is taking a Neurobiology course, it seems to me
humans know very little about the workings of the brain, let alone
other areas of biology where there are many unanswered questions about
how things work or why certain processes go on.  How can one simulate
something that is not fully (or even largely) understood?  Especially
something as unpredictable and incredibly complex as the human body?
I would like to see a proof...

>Let's get down to a specific case:
>I propose that given any amount of computing power, you could not presently,
>and probably will never be able to, simulate me: Scott R. Presnell.
>My wife can be the judge.  

Good test!  However, to be fair, Mr. Engleson seemed to be talking about
"a" human body (read: NOT a specific human body or person), not Mr.
Presnell's body.  But I agree with Mr. Presnell; my beloved would
notice a difference too (at least I would hope ;-).

>This may sound reactionary, that's because that's the way I responded
>internally to this first statement.  I apologize if I've jumped into a
>discussion too quickly, I don't have time to read the previous discussions
>right now.  

I was going to write a flame immediately when I saw Mr. Engleson's
statement, but I thought I should wait.  If Mr. Presnell's followup is
out of line, then it is just as out of line as Mr. Engleson's statement.

Disclaimer: the above my thoughts and no one else's, except (maybe)
those of my cockatiels!

>Scott Presnell						Organic Chemistry
>Swiss Federal Institute of Technology  (ETH-Zentrum)
>CH-8092 Zurich, Switzerland.
>uucp:seismo!mcvax!cernvax!ethz!srp (srp@ethz.uucp); bitnet:Benner@CZHETH5A


-- 
Chris Lishka                    /lishka@uwslh.uucp
Wisconsin State Lab of Hygiene <-lishka%uwslh.uucp@rsch.wisc.edu
"What, me, serious? Get real!"  \{seismo, harvard,topaz,...}!uwvax!uwslh!lishka

scott@swatsun (Jay Scott) (10/22/87)

> Nevertheless, I think my more general observations about AI's definitional
> problem remain valid. Compilers are a 'success' of AI. So are heuristic-based
> search-and-backtrack algorithms. So is the visual analysis preprocessing used
> in seeing pick-and-place robots. So (most recently) are 'expert systems'.
> In *each case*, these problem areas were defined out of the AI field as soon
> as they spawned halfway-usable technologies and acquired their own research
> communities.

I agree.  And I want to understand better why it's so.

Here's one speculation:  People see intelligence as mysterious, intrinsically
non-understandable.  So anything understood can't be part of intelligence,
and can't be part of AI.  I assume this was what Eric had in mind in a
previous article, when he mentioned "hidden vitalist premises".

Of course some people believe explicitly that intelligence is mystical,
and say so.  But even AI people may implicitly feel that, oh, this algorithm
isn't clever enough, real intelligence has to be cleverer than that.  And
so it goes.

Any other good ideas?

>      Eric S. Raymond
>      UUCP:  {{seismo,ihnp4,rutgers}!cbmvax,sdcrdcf!burdvax,vu-vlsi}!snark!eric
>      Post:  22 South Warren Avenue, Malvern, PA 19355    Phone: (215)-296-5718

				Jay Scott
			  ...bpa!swatsun!scott

spe@SPICE.CS.CMU.EDU (Sean Engelson) (10/22/87)

A couple of clarifications in response to recent posts:

(a) My name is Engelson---NOT Engleson.

(b) I did not state that we could simulate the human body and brain at
this point in time.  However, we could at some point, presumably, get
to the point where we know precisely how the body is constructed, and
construct a simulation of the physical processes that occur.  This is
reasonable because the human body is finite in extent, and thus there
is a finite amount of information to discover, thus it can be
discovered in finite (although possibly very large) time.  This is why
I say that computers are not a less-powerful model of computation than
the human brain, as the one can simulate the other.  By 'as powerful'
I mean that the same computations may be performed by both; in the
same sense that a serial computer is as powerful as a parallel one, as
the one can simulate the other, although with a great loss of efficiency.

(c) No, it would not be necessary to simulate the physical world in
our hypothetical super-computer.  We could simulate the actions of the
sensory inputs by filtering such things as movie-camera output,
tactile sensors, etc., through a simulation of human sensory organs.
We know that that is theoretically possible through the same line of
reasoning as above.
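
As a toy illustration of what "filtering through a simulation of the
sensory organs" might mean (the stages and numbers below are invented
stand-ins, not a serious model of any organ):

    def toy_retina(frame):
        # Center-surround contrast on a 1-D ring of luminance samples,
        # roughly the kind of signal early visual neurons pass along.
        n = len(frame)
        return [frame[i] - (frame[i - 1] + frame[(i + 1) % n]) / 2
                for i in range(n)]

    def sensory_pipeline(frames, organs):
        # Raw recorded input -> simulated sense organs -> the 'neural'
        # signal the simulated body would actually receive.
        for frame in frames:
            for organ in organs:
                frame = organ(frame)
            yield frame

    frames = [[0, 0, 10, 0, 0], [0, 10, 10, 10, 0]]  # fake camera output
    for signal in sensory_pipeline(frames, [toy_retina]):
        print(signal)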

	-Sean-
--
Sean Philip Engelson			I have no opinions.  
Carnegie-Mellon University		Therefore my employer is mine.
Computer Science Department	
----------------------------------------------------------------------
ARPA: spe@spice.cs.cmu.edu
UUCP: {harvard | seismo | ucbvax}!spice.cs.cmu.edu!spe

varol@cwi.nl (Varol Akman) (10/23/87)

In article <213@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
>
>                         ....................
>
>discovered in finite (although possibly very large) time.  This is why
>I say that computers are not a less-powerful model of computation than
>the human brain, as the one can simulate the other.  By 'as powerful'
>                 ---------------------------------

Congratulations, when are you going to receive your Nobel prize
for discovering that?

Varol Akman, CWI, Amsterdam

What is an individual?  A very good question.  So good, in fact, that
we should not try to answer it.                          - DANA SCOTT

gilbert@hci.hw.ac.uk (Gilbert Cockton) (10/23/87)

In article <1922@gryphon.CTS.COM> tsmith@gryphon.CTS.COM (Tim Smith) writes:
(the best posting on this issue I've seen)

>It wasn't until computers came along that there was a
>metaphor for the brain powerful enough to be taken seriously.

Hence the circularity in much AI appeal to cognitive psychology.
As the latter is now riddled with information processing concepts, the
impulsive observer will be quick to conclude from cog. psy. research
that cognition works like a computer. Wrong conclusion - many cognitive
psychologists talk about mind *as if it were* a computer. Likeness,
especially presumed likeness, is not the same as essence, assuming
noumenal objects exist of course.

>There is no reason, in principle, that a very powerful
>digital computer cannot imitate a mind

Apologies for picking up on this, given the writer's (deleted)
qualification and probable sarcasm about arguments of this form.  This
may appear perverse, but what on earth are these arguments of the form
"nothing in principle prevents"? They are used much by the "pure" AI
misanthropes, but I can never find any substance in such arguments.
Which principles? How can we argue from these principles to
possibility/impossibility? After all, is there anything of any genuine
interest to non-logicians which is logically impossible, rather than
semantically contradictory (a married bachelor for example)?

Again, I pick this up because AI zealots reach for this argument all
the time, and it isn't an argument at all.

(PS - no flames on "misanthrope" or "zealot", one can be studying an
AI topic without losing one's humanism or one's sense of moderation.
I am only characterising those who are misanthropic zealots, a specialisation
and not a generalisation.)

>The success rate in AI research (as well as most of cognitive
>science) in the past 20 years is not very encouraging.

Despite all that taxpayers' money :-)

> A better concept of "mind" is what is needed now.

Well said. "Better" concepts related to mind than those found in cog. sci.
already exist. The starting point is the elaboration of the observable human
phenomena which we are attempting to unify within a study of mind. These
phenomena have been studied since the dawn of time. There are many
monumental works of scholarship which unify the phenomena grouped into
well-defined subfields. The only problem for AI workers surveying all
these masterpieces is that none of the authors are committed to
computational models. Indeed, they would no doubt laugh at anyone who
suggested that their work could be reduced to a Turing Machine compatible
notation.

> This is not to say that AI research should halt

But AI research could at least be disciplined to study the existing work
on the phenomena they seek to study. Exploratory, anarchic,
uninformed, self-indulgent research at public expense could be stopped.
(and not just in AI, although I've never seen such a lack of
discipline and scholarship anywhere else outside of popular history
and futurology, neither of which attracts public funds).

> or that computers are not useful in studying human
> intelligence. (They are indispensable.)

Yes (no). They have proved useful in many areas of study. They have
never been used at all in others, because they have not been able to
offer anything worthy of attention.

> For one example of this new way of thinking, see the recent book by the
> linguist George Lakoff, entitled "Women, Fire, and Dangerous Things."

Does he use computers?

>I believe the great success of AI has been in showing that
>the old dualistic separation of mind and body is totally
>inadequate to serve as a basis for an understanding of human intelligence.

How can you attribute the end of dualism to AI research? This is a
historical statement which should be backed up by references to
specific pieces of work in AI. I doubt that anything emerging from AI
(rather than the disciplines of Cognitive Science)
-- 
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
		UUCP:	..{backbone}!mcvax!ukc!hwcs!hci!gilbert

eric@snark.UUCP (Eric S. Raymond) (10/23/87)

In article <1342@tulum.swatsun.UUCP>, scott@swatsun (Jay Scott) writes:
>[quoting me:]
>> In *each case*, these problem areas were defined out of the AI field as soon
>> as they spawned halfway-usable technologies and acquired their own research
>> communities.
> 
> Here's one speculation:  People see intelligence as mysterious, intrinsically
> non-understandable.  So anything understood can't be part of intelligence,
> and can't be part of AI.  I assume this was what Eric had in mind in a
> previous article, when he mentioned "hidden vitalist premises".

Yes, that is precisely what I intended.

> Any other good ideas?

Maybe :-). A friend once told me that she'd read that human institutions reach
a critical size at 250 people; that that is the largest social unit for which
a single member can keep a reasonable grasp on the capabilities and style of
everyone else in the group. This insight explains the allegedly remarkably
consistent size of pre-industrial villages in areas where enough settlement
land is available so that people can move elsewhere when they want.

There is supposedly one well-known company that has found that the productivity
gains from holding their work units down to this size more than justify the
diseconomies of scale from small plants.

This idea gets some confirmation from my experience of SF fandom, a totally
voluntarist subculture that has, historically, thrown off sub-communities
like yeast buds (SCA, Trek fandom, the Darkovans, the Dr. Who people, etc.
etc.). We even have a name for these 'buds'; they're called "fringe fandoms"
and the people in them "fringefen" (the correct plural of "SF fan" is, by
ancient tradition "SF fen").

In this context, the theory needs a little generalizing; what seems
to count for that magic 250 is not the number of self-described "Xites", but
rather the smaller number of *organizers* and *regulars*; the people that
maintain the subculture's communications networks and set its style.

Now: let's assume a parallel division in science between "stars" (the people
who do, or are seen to be doing, the important work) and "spear carriers"
(the people who fill in the corners, tie down the details, go after the
last decimal places, and get most of the grants ;-)). We then have:

RAYMOND'S HYPOTHESIS:
	A scientific field with more than 250 "stars" will tend to fragment
	into subspecialties more and more strongly as the size increases.

It would be interesting to look at other classes of voluntarist subcultures
(like, say, fringe political parties) to see if a similar pattern holds.

-- 
      Eric S. Raymond
      UUCP:  {{seismo,ihnp4,rutgers}!cbmvax,sdcrdcf!burdvax,vu-vlsi}!snark!eric
      Post:  22 South Warren Avenue, Malvern, PA 19355    Phone: (215)-296-5718

gilbert@hci.hw.ac.uk (Gilbert Cockton) (10/23/87)

In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
>Given a sufficiently powerful computer, I could, in theory, simulate
>the human body and brain to any desired degree of accuracy.  This
>gedanken-experiment

Keinen Gedanken, mein Herr!  (No thought at all, my dear sir!)

In **WHICH THEORY**? Cut out this use of theoretical to mean "given
arbitrary fantasies".  Theories have real substance, and you are
obliged to elaborate on the theory before alluding to it.

Given a sufficiently powerful computer, could I, in theory, get
everyone on the net to like my postings? Rhetorical of course, so spare
me any abusive replies :-). The point again, is that I would have to
elaborate the theory and test it out to be sure. Furthermore, I could
not expect everyone to be convinced, that in the event of highly
unlikely (impossible I believe) universal acceptance of my postings,
that my theory really was the explanation.  In short, even if one dropped
fantasy for science, people in general are not going to be convinced.

> if I can simulate the body in a computer, the computer is a
> sufficiently powerful model of computation to model the mind.

Of course. Now simulate it. And of course, you won't be slowed down by reading
up on all the unanswered objections to the **belief** that computable formalisms
can model mind. In short, this is no contribution to the argument.

>we must also accept that a computer can have a mind, if only by the
>inefficient expedient of simulating a body containing a mind.

Ahem. Socialisation.

AI people rarely have a handle on this at all. I take it that your
computer simulation of the body is going to go down to the park with
you to see the ducks, go down to playgroup, start primary school and
work through to a degree, mixing all the time with a wide range of
people, reading books, watching TV and visiting interesting places?

Look, people are people because they interact as people with people.
Now, who's going to want to interact with your computer as if it were
a person?

Need I go on?
-- 
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
		UUCP:	..{backbone}!mcvax!ukc!hwcs!hci!gilbert

gilbert@hci.hw.ac.uk (Gilbert Cockton) (10/23/87)

>How can you attribute the end of dualism to AI research? This is a
>historical statement which should be backed up by references to
>specific pieces of work in AI. I doubt that anything emerging from AI
>(rather than the disciplines of Cognitive Science)

had anything to do with this supposed metaphysical shift.

Now don't eat that one!
-- 
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
		UUCP:	..{backbone}!mcvax!ukc!hwcs!hci!gilbert

lishka@uwslh.UUCP (Christopher Lishka) (10/23/87)

In article <213@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
>
>A couple of clarifications in response to recent posts:
>
>(b) I did not state that we could simulate the human body and brain at
>this point in time.  However, we could at some point, presumably, get
>to the point where we know precisely how the body is constructed, and
>construct a simulation of the physical processes that occur.  This is
>reasonable because the human body is finite in extent, and thus there
>is a finite amount of information to discover, thus it can be
>discovered in finite (although possibly very large) time.  This is why
>I say that computers are not a less-powerful model of computation than
>the human brain, as the one can simulate the other.  By 'as powerful'
>I mean that the same computations may be performed by both; in the
>same sense that a serial computer is as powerful as a parallel one, as
>the one can simulate the other, although with a great loss of efficiency.
>

I have some questions of Mr. Engelson (forgive me if I misspelled your
name in my last posting), that others on the net might answer also:

How do we know that a computer and a human are "as powerful" as each
other?  How do we know that the same computations can be performed on
each "entity?"  Referring back to the biological sciences (esp.
Neurobiology), it would seem that there is so much that is *not* known
that coming to conclusions about abstract things such as how a human
body computes (especially billions of computations that we are not
aware of) is a bit naive at this point.  It seems like so many
mistakes that were made in the past about the human body and mind: the
brain as complex plumbing, the brain as a rather large telephone
network, etc.  Can the assumption that the two are equal in their
power to compute really be made based on what humans know (and do not
know) about their own functioning? Just a thought (maybe I am looking
at this the wrong way...).

By the same reasoning as above, is the analogy between serial and
parallel computers (and a computer and human body) really a good one?
The differences between any computer and a human body (based on the
little we do know) are staggering.  In theory, things appear to be the
same.  But computers do not have hormones, neurotransmitters, internal
messengers, complex channels, etc. for each of their "basic"
constituents (which I am assuming are cells).  Now, theoretically they
may not be necessary.  In constructing a model, it is easy to lose track of
the difference between what can be implemented and what is easy to implement.
But in practice the mechanisms may be necessary.  I don't know.  No one
else knows.  But I do know that my Professor of Neurobiology (whom I
think is a good source) as well as the Grad. Students I have spoken
with *all* warn me to beware of these oversights, because the small
details are what do make the difference.  If these messenger molecules
and different neurotransmitters and sodium/potassium/calcium channels
and electrical vs. chemical channels were totally useless, why have
they survived millions of years of evolution?  Are we then only
super-parallel processors when compared to parallel-processing
computers, just as parallel-processing computers are to serial
computers? 

>(c) No, it would not be necessary to simulate the physical world in
>our hypothetical super-computer.  We could simulate the actions of the
>sensory inputs by filtering such things as movie-camera output,
>tactile sensors, etc., through a simulation of human sensory organs.
>We know that that is theoretically possible through the same line of
>reasoning as above.

Is this reasonable?  Could we raise a human being properly by hooking
his retinal receptors to wires, his aural receptors to wires, his
tongue connections to a computer simulation, etc.?  Would we get a
*normal* person?  Personally, I don't think so, but then I don't know;
no one knows.  And until someone such as Hitler comes along, the
question will probably remain unanswered.  Now, I feel this applies to
computers because we would, in effect, be doing the same thing (given
that we could artificially create a model of a human in a computer).
You would still need to simulate the real world in the images that you
gave the machine.  The images would need to respond to the machine.
When the machine wanted to move, all of the images and artificial
senses would need to reflect that.  When the machine wanted to
ask a question while standing on its head, twiddling its fingers,
chewing gum, and computing pi to the fourth power, could the images
and artificial senses fed to it effectively simulate that?  (I know,
it probably wouldn't have a head or do those things, so just insert
any funny little thing that a "child" computer-modelled human would do
at once.)  Again, no small feat.  Is this really possible in the
future?


>Sean Philip Engelson			I have no opinions.  

Just some thoughts of mine (the above are NOT intended to be flames).
I feel this is a very interesting discussion, but in the end it hinges on
one's personal beliefs and philosophies (but then, what doesn't ;-)
The usual disclaimer applies (including the bit about the cockatiels).

					-Chris


-- 
Chris Lishka                    /lishka@uwslh.uucp
Wisconsin State Lab of Hygiene <-lishka%uwslh.uucp@rsch.wisc.edu
"What, me, serious? Get real!"  \{seismo, harvard,topaz,...}!uwvax!uwslh!lishka

coray@nucsrl.UUCP (Elizabeth) (10/24/87)

in response to:  spe@SPICE.CS.CMU.EDU (Sean Engelson) /  9:21 am  Oct 22, 1987 /

> This is reasonable because the human body is finite in extent, 
> and thus there is a finite amount of information to discover, 
> thus it can be discovered in finite (although possibly very large) time. 

I am planning on gracefully failing my qualifiers in just two weeks, and 
one of the questions I plan to fail will have to do with decidability. 
Because now I know that I will blithely point out that language is finite in
extent and thus there is only a finite amount of information which it
can convey, so why worry about unprovable true theorems?  We'll just
prove all the true ones (in possibly very large finite time?) and then
see if the theorem of interest is in this finite set.  

Grade -2.
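
The serious point under the sarcasm: proofs can be *enumerated*, one at
a time forever, but never collected into a finished finite set.  A toy
sketch (the "proof system" here is invented purely for illustration):

    from itertools import count, product

    def all_strings(alphabet="ab"):
        # Every finite string, shortest first: the sense in which "all
        # proofs" can be listed -- an endless stream, never a finite set.
        for n in count(1):
            for s in product(alphabet, repeat=n):
                yield "".join(s)

    def provable(statement, checks):
        # Semi-decision only: returns True if a proof turns up, but
        # simply never returns when no proof exists.
        for candidate in all_strings():
            if checks(candidate, statement):
                return True

    checks = lambda proof, n: proof == "a" * n  # toy rule: n a's prove n
    print(provable(3, checks))                  # terminates: True
    # provable(-1, checks)  # would loop forever -- the whole point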

alan@pdn.UUCP (Alan Lovejoy) (10/24/87)

In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
/Given a sufficiently powerful computer, I could, in theory, simulate
/the human body and brain to any desired degree of accuracy...
/...if I can simulate the body in a computer, the
/computer is a sufficiently powerful model of computation to model the
/human mind...

The ultimate in "machine emulation"!!!!

Why does this remind me of Chomsky's concept of 'weak' and 'strong' 
equivalence between grammars?  Hmmm...

--alan@pdn

alan@pdn.UUCP (Alan Lovejoy) (10/24/87)

In article <224@bernina.UUCP> srp@bernina.UUCP (Scott Presnell) writes:
/In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
/>Given a sufficiently powerful computer, I could, in theory, simulate
/>the human body and brain to any desired degree of accuracy.  
/
/Horse shit.  The problem is you don't even know exactly what you are
/simulating! ...  
/For instance, dreams: are they logical? Do they fall into a pattern? A computer
/has got to have them to be a real simulation of a body/mind, but you cannot
/simulate what you cannot accurately describe.

Simulated horse shit!  I can write a simulator for the IBM-PC to run on
a Macintosh-II, without knowing or understanding all the IBM-PC programs
that will ever run on it.  The same is in principle possible when the
machine being emulated is a human body.
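
The emulator needs only the machine's transition rules, not
foreknowledge of its programs.  A minimal sketch (the 3-instruction
machine is invented, purely illustrative):

    def emulate(program, x=0):
        # Fetch-decode-execute loop.  The emulator encodes the MACHINE's
        # rules; it runs any program written for that machine, including
        # programs its author never saw.
        pc = 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "ADD":
                x += arg
            elif op == "JNZ" and x != 0:   # jump if x is nonzero
                pc = arg
                continue
            elif op == "HALT":
                break
            pc += 1
        return x

    # A program the emulator's author never anticipated: count x down.
    prog = [("ADD", 5), ("ADD", -1), ("JNZ", 1), ("HALT", 0)]
    print(emulate(prog))  # -> 0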

/Let's get down to a specific case:
/I propose that given any amount of computing power, you could not presently,
/and probably will never be able to, simulate me: Scott R. Presnell.
/My wife can be the judge.  

Which wife? The one being simulated by the computer as part of the
simulated environment in which you are being simulated?  How would you
or she know which "world" you belonged to?

--alan@pdn

alan@pdn.UUCP (Alan Lovejoy) (10/25/87)

In article <1993@gryphon.CTS.COM> tsmith@gryphon.CTS.COM (Tim Smith) writes:
/In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
/+=====
/| Given a sufficiently powerful computer, I could, in theory, simulate
/| the human body and brain to any desired degree of accuracy.  This

/You might, for example, claim that with a
/very large number of computers, all just at the edge of the
/speed boundaries dictated by the laws of physics in the most
/advanced materials imaginable, you could simulate a human body
/and mind--but not in real time. But the simulation would have to
/be in real time, because humans live in real time, doing things
/that are critically time dependent (perceiving speech, for
/example).

You make the invalid assumption that "simulation" means that those of
us in the real universe can not distinguish the simulated object or 
process from the real thing.  It is just as valid to deal with
simulations that enable one to make accurate predictions about what
would happen in the real world in some well-specified scenario, even
if the simulation doesn't look anything like what it simulates in the
physical sense.  What matters is the logical equivalence or similarity
in an abstract reality.

/Similarly, humans think the way they do partially because of
/their size, because of the environment they live in, because of
/the speed at which they move, live, and think.

If the environment of an object is simulated in addition to the object
itself, one need merely synchronize the object with the simulated
environment as to speed, size, etc.

--alan@pdn

goldfain@osiris.cso.uiuc.edu (10/26/87)

> tsmith@gryphon.CTS.COM writes
> Now here's the interesting point. If you were to come to me and say--
> "Smith, you have a year to develop an automaton that will play some
> kind of major sport at a championship level, competing against humans.
> Money is no object, and you can have access to all the world's
> experts in AI and robotics, but you must design a robot that plays
> championship X in a year's time.  What is X?" I would say, without a
> moment's hesitation, "tennis".
>
> Why? Of all the sports, tennis is the most bounded. It is played within
> a very restricted area (unlike golf or even baseball), it is a
> one-against-one sport (unlike football or soccer), the playing surfaces
> (aside from Wimbledon) are the truest of all the major sports, and it
> is indubitably the most boring of all the sports to watch (if not to
> play). A perfect candidate for automation.
> ----------------

Hmmm, by your own criterion, I would prefer table tennis, or, to make life
really easy, bowling.  I had heard that a table-tennis-playing robot has
been developed that is really quite good.  Bowling is really way too simple.

(If what I have heard is correct, Othello would also be a good choice -
computers have already been claimed by some to outperform humans here, but
it's not a major sport.)

tony_mak_makonnen@cup.portal.com (10/26/87)

This is exemplary of what happens when many perspectives enter the
picture and words flow.  I submit the following:
  It was von Neumann himself (I believe) who said that anything
that can be calculated precisely, i.e. mathematically, can be done
better by a computer.  (I think this should pass even with the
most rabid hater of computers.)

I note that the man who is getting lambasted used the words "computed"
and "computational".  I should think he would agree that if one began
to talk of reflection, intuition, and so on, the conversation would be
totally different.  Or are we to think that with great and intensive
enough computation the machine will eventually exhibit awareness of
itself as something that *is*?!

todd@net1.ucsd.edu (Todd Goodman) (10/26/87)

In article <131@glenlivet.hci.hw.ac.uk> gilbert@hci.hw.ac.uk (Gilbert Cockton) writes:
>"Better" concepts related to mind than those found in cog. sci.
>already exist. The starting point is the elaboration of the observable human
>phenomena which we are attempting to unify within a study of mind. These
>phenomena have been studied since the dawn of time. There are many
>monumental works of scholarship which unify the phenomena grouped into
>well-defined subfields. The only problem for AI workers surveying all
>these masterpieces is that none of the authors are committed to
>computational models. Indeed, they would no doubt laugh at anyone who
>suggested that their work could be reduced to a Turing Machine compatible
>notation.

Please, please, please give us a bibliography of these works.  In fact a
short summary would be great, along with the reasons that you find them to be 
better than any current models.  Also, if you could point out which are at
odds with each other and which you feel are "better" than others, I would
be greatly appreciative.

This isn't a flame about your response to the earlier posting.  I just want to
take a look at the monumental works you're talking about.

				Todd Goodman
				todd@net1.ucsd.edu
				...!{ucbvax|ihnp4}!sdcsvax!net1!todd

merlyn@starfire.UUCP (Brian Westley) (10/26/87)

In one article...
> But AI research could at least be disciplined to study the existing work
> on the phenomena they seek to study. Exploratory, anarchic,
> uninformed, self-indulgent research at public expense could be stopped.

and, in another article...
>..Thus, I am not avoiding hard work; I am avoiding
>*fruitless* work...
> -- 
>    Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN

Tell me, how do you know WHICH AI methods WILL BE fruitless?  You certainly
must know, for you to call it anarchic, uninformed, and self-indulgent (but why
'exploratory' is used as a put-down, I'll never know - I guess Gilbert already
knows how to build thinking machines, and just won't tell us).

Research is like advertising - most of the money spent is fruitless, but
you won't KNOW that until after you've TRIED it.  (Of course it isn't
entirely wasted; you now know what doesn't work.)

Fortunately, you have not convinced me, nor many other people, that your
view is to be held paramount and that all other avenues of work are doomed
to failure.

By the way, I am not interested in duplicating or otherwise developing
models of how humans think; I am interested in building machines that
think.  You may as well tell a submarine designer how difficult it is
to build artificial gills - it's irrelevant.

---
Merlyn LeRoy
"Anything a computer can do is immediately removed from those activities
that require thinking, such as calculations, chess, and medical diagnoses."

josh@topaz.rutgers.edu (J Storrs Hall) (10/27/87)

> tsmith@gryphon.CTS.COM writes
> Now here's the interesting point. If you were to come to me and say--
> "Smith, you have a year to develop an automaton that will play some
> kind of major sport at a championship level, competing against humans.
> Money is no object, and you can have access to all the world's
> experts in AI and robotics, but you must design a robot that plays
> championship X in a year's time.  What is X?" I would say, without a
> moment's hesitation, "tennis".

Goldfain says bowling, which is a very good choice, being in a
completely artificial environment.  It might have (with ping-pong)
the problem of not "really being a sport".  If we define "major sport"
as something done outside in real time against competition and often
televised on major networks, I would have to go with the 50 yard dash.
If we allow any Olympic event, offhand sharpshooting looks promising,
javelin throwing looks easy, and shot put looks trivial.

In fact, the more I think about it, tennis is probably one of the 
*hardest* sports to implement.  I imagine a team of football-playing
robots:  they look something like tanks...

The point in all this is obviously that in the history of replacing
human effort with mechanical effort, brute force was the first success
story. 
	*	*	*	*

"The Yankees pitcher steps to the mound.  It is a Cincinnati Milacron
G97A22013 just brought up from the minors.  Here's the pitch!  Holy
cow!  A 957 mph fastball on the inside corner for strike one! ..."

--JoSH

goldfain@osiris.cso.uiuc.edu.UUCP (10/29/87)

Who says that ping-pong, or table tennis, isn't a sport?  Ever been to China?

gilbert@hci.UUCP (10/30/87)

In article <4171@sdcsvax.UCSD.EDU> todd@net1.UUCP (Todd Goodman) writes:
>>"Better" concepts related to mind than those found in cog. sci.
>>already exist. There are many monumental works of scholarship which unify
>> the phenomena grouped into well-defined subfields.
>
>Please, please, please give us a bibliography of these works. 

Impossible at short notice. Obvious examples are Lyons' work on
semantics (1977?, 2 vols, Cambridge University Press). My answer to
anyone in AI about relevant scholarship is go and see your local
experts for a reading list and an orientation.

By "concepts related to mind", I intend all work concerned with
language, thought and action. That is, I mean an awful lot of work. My
first degree is in Education, which, coupled with my earlier work in
History (especially social and intellectual history), brought me into
contact with a wide range of disciplines, and forced me to use each to
the satisfaction of those supervising me. However, I am now probably
out of date, as I've spent the last four years working in
Human-Computer Interaction.

Any work in linguistics under the heading of 'Semantics' should be of
great interest to people working in Knowledge Representation. There is
a substantial body of philosophical work under the heading of
"Philosophy of Mind". Unlike Cognitive Psychology (especially memory
and problem solving), this work has not become fixated on information
processing models. Anthropologists are doing very interesting work on
category systems; the work of the "New" or "Cognitive" archaeologists
at Cambridge University (nearly all published by Cambridge University
Press) is drawing on much recent continental work on social action.
Any anthropologist should be able to direct you to the older work on
such cultures as the Subanum and the Trobriand Islanders - most of this
work was done by Americans and is more accessible, as it does not
require acquaintance with recent Structuralist and post-Structuralist
concepts, which can be very dense and esoteric.

>the reasons that you find them to be better than any current models.

This work is inherently superior to most work in AI because none of the
writers are encumbered by the need to produce computational models.
They are thus free to draw on richer theoretical orientations which
draw on concepts which are clearly motivated by everyday observations
of human activity. The work therefore results in images of man which
are far more humanist than mechanical computational models. Workers in
AI may be scornful of such values, but in reality they should realise
that adherents to a mechanistic view of human behaviour are very
isolated and in the minority, both now and throughout history. The
persistence of humanism as the dominant approach to the wider studies
of man, even after years of zealous attack from self-proclaimed
'Scientists', should be taken as a warning against the acceptability of
crude models of human behaviour. Furthermore, the common test of any
concept of mind is "can you really imagine your mind working this way?"
Many of the pillars of human societies, like the freedom and dignity of
democracy and moral values, are at odds with the so called 'Scientific'
models of human behaviour; indeed the work of misanthropes like Skinner
actively promotes the connection between impoverished models of man and
immoral totalitarian societies (B.F. Skinner, Beyond Freedom and Dignity).

In short, mechanical concepts of mind and the values of a civilised
society are at odds with each other. It is for this reason that modes
of representation such as the novel, poetry, sculpture and fine art
will continue to dominate the most comprehensive accounts of the human
condition.
-- 
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
		UUCP:	..{backbone}!mcvax!ukc!hwcs!hci!gilbert

sramacha@udel.EDU (Satish Ramachandran) (10/30/87)

In article <8300008@osiris.cso.uiuc.edu> goldfain@osiris.cso.uiuc.edu writes:
>
>Who says that ping-pong, or table tennis, isn't a sport?  Ever been to China?

Rightly put! Ping-pong may not be a spectator sport in the West, and
hence may be suspected of being a 'sport' where little skill is involved.
But if you read about it, you will find that the psychological aspect
of the game is far more intense than in, say, baseball or golf!
Games go to 21 points and are over very quickly (often on the
serves themselves!)
Granting the intense psychological factors to be considered while
playing ping-pong (as in many other games): would it be easier to make
a machine play a game where there is a lot of real time to decide its
next move, or to make it play a game where things must be decided more
quickly?
Satish

P.S. Btw, ping-pong is also a popular sport in Japan, India, England, 
Sweden and France.

yamauchi@SPEECH2.CS.CMU.EDU (Brian Yamauchi) (11/03/87)

In article <137@glenlivet.hci.hw.ac.uk>, gilbert@hci.hw.ac.uk (Gilbert Cockton) writes:
> This work is inherently superior to most work in AI because none of the
> writers are encumbered by the need to produce computational models.
> They are thus free to draw on richer theoretical orientations which
> draw on concepts which are clearly motivated by everyday observations
> of human activity. The work therefore results in images of man which
> are far more humanist than mechanical computational models.

I think most AI researchers would agree that the human mind is more than a
simple production system or back-propagation network, but the more basic
question is whether or not it is possible for human beings to understand
human intelligence.  If the answer is no, then not only cognitive
psychologists, but all psychologists will be doomed to failure.  If the
answer is yes, then it should be possible to build a system that uses
that knowledge to implement human-like intelligence.  The architecture of
this system may be totally unlike today's computers, but it would be
man-made ("Artificial") and would possess human-like intelligence.

This may require some completely different model than those currently
popular in cognitive science, and it would have to account for
"non-computational" human behavior (emotions, creativity, etc.), but as long
as it was well-defined, it should be possible to implement the model in some
system.

I suppose one could argue that it will never be possible to perfectly
understand human behavior, so it will never be possible to make an AI which
perfectly duplicates human intelligence.  But even if this were true, it
would be possible to duplicate human intelligence to the degree that it was
possible to understand human behavior.

> Furthermore, the common test of any
> concept of mind is "can you really imagine your mind working this way?"

This is a generally useful, if not always accurate, rule of thumb.  (It is
also the reason why I can't see why anyone took Freudian psychology
seriously.) 

Information-processing models (symbol-processing for the higher levels,
connectionist for the lower levels) seem more plausible to me than any
alternatives, but they certainly are not complete and to the best of my
knowledge, they do not attempt to model the non-computational areas.  It
would be interesting to see the principles of cognitive science applied to
areas such as personality and creativity.  At least, it would be interesting
to see a new perspective on areas usually left to non-cognitive
psychologists.

> Many of the pillars of human societies, like the freedom and dignity of
> democracy and moral values, are at odds with the so called 'Scientific'
> models of human behaviour; indeed the work of misanthropes like Skinner
> actively promotes the connection between impoverished models of man and
> immoral totalitarian societies (B.F. Skinner, Beyond Freedom and Dignity).

True, it is possible to promote totalitarianism based on behaviorist
psychology (i.e. Skinner) or mechanistic sociology (i.e. Marx), both of
which discard the importance of the individual.  On the other hand, simply
understanding human intelligence does not reduce its importance -- an
intelligence that understands itself is at least as valuable as one that
does not.

Furthermore, totalitarian and collectivist states are often promoted on the
basis of so-called "humanistic" rationales -- especially for socialist and
communist states (right-wing dictatorships seem to prefer nationalistic
rationales).  The fact that such offensive regimes use these justifications
does not discredit either science or the humanities.
______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

lee@uhccux.UUCP (Greg Lee) (11/03/87)

In article <1641@pdn.UUCP> alan@pdn.UUCP (0000-Alan Lovejoy) writes:
>In article <224@bernina.UUCP> srp@bernina.UUCP (Scott Presnell) writes:
>/In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
>/>Given a sufficiently powerful computer, I could, in theory, simulate
>/> ...
>/ ...
>Simulated horse shit!  I can write a simulator for the IBM-PC to run on
>a Macintosh-II, without knowing or understanding all the IBM-PC programs
> ...
Maybe a good analogy.  I once wrote a simulator for CPM-80 inside CPM-86,
and found that much of the effort was in simulating the CPM-80 operating
and I/O systems, even though the two systems are very similar. How
would you compare our knowledge of the IBM-PC operating system with
our knowledge of the human system?
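
To sketch (hypothetically - this is not the actual CPM code) where that
effort goes: the instruction set is a small switch statement, but every
guest operating-system call has to be trapped and re-implemented, one
by one, on top of the host's services.

    #include <stdio.h>

    typedef void (*syscall_fn)(int arg);

    /* each guest OS entry point, hand-written against the host */
    static void sys_console_out(int c)  { putchar(c); }
    static void sys_read_file(int fcb)  { (void)fcb; /* map guest file
                                            control blocks to host files */ }
    static void sys_write_file(int fcb) { (void)fcb; /* likewise, call
                                            by call */ }

    static syscall_fn bdos_table[] = {
        sys_console_out, sys_read_file, sys_write_file
    };

    /* invoked whenever emulated code enters a guest system call */
    void trap_guest_syscall(int callno, int arg)
    {
        if (callno >= 0 && callno < 3)
            bdos_table[callno](arg);
    }

    int main(void)                       /* demo: guest writes 'A' */
    {
        trap_guest_syscall(0, 'A');
        return 0;
    }

The dispatch table, not the CPU loop, is where most of the work - and
most of the required knowledge of the guest system - ends up.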
	Greg Lee, lee@uhccux.uhcc.hawaii.edu

alan@pdn.UUCP (11/04/87)

In article <1056@uhccux.UUCP> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
/I once wrote a simulator for CPM-80 inside CPM-86,
/and found that much of the effort was in simulating the CPM-80 operating
/and I/O systems, even though the two systems are very similar. How
/would you compare our knowledge of the IBM-PC operating system with
/our knowledge of the human system?

Oh, we hardly know anything by comparison.  But if we *did* know as much
about ourselves as we do about MS-DOS...

--alan@pdn

josh@topaz.rutgers.edu (J Storrs Hall) (11/05/87)

    Brian Yamauchi:
    ... the more basic
    question is whether or not it is possible for human beings to understand
    human intelligence.  If the answer is no, then not only cognitive
    psychologists, but all psychologists will be doomed to failure.  

Actually, it is probably possible to build a system that is more
complex than any one person can really "understand".  This seems to be
true of a lot of the systems (legal, economic, etc) at large in the
world today.  The system is made up of the people each of whom
understands part of it.  It is conjectured by Minsky that the mind is
a similar system.  Thus it may be that AI is possible where psychology
is not (in the same sense that economics is impossible).
--JoSH

spf@moss.ATT.COM (11/05/87)

In article <16240@topaz.rutgers.edu> josh@topaz.rutgers.edu (J Storrs Hall) writes:
}Actually, it is probably possible to build a system that is more
}complex than any one person can really "understand".  This seems to be
}true of a lot of the systems (legal, economic, etc) at large in the
}world today.  The system is made up of the people each of whom
}understands part of it.  It is conjectured by Minsky that the mind is
}a similar system.  Thus it may be that AI is possible where psychology
}is not (in the same sense that economics is impossible).

Your point here makes a lot of sense, and the analogy to economics (as
a complex human-made system that nobody understands) is excellent.
To take it to its logical conclusion, then, we can decide that perhaps
AI CAN model human intelligence, but we won't understand it when it does!!

Actually, I find this the most appealing view of all that have
appeared so far in this discussion.  There are many other examples of
human inventions beyond our total comprehension (e.g. we're still
learning some of the more subtle reasons why airplanes fly the way
they do, even though that didn't slow down the Wright Bros.).  And
much of software integration testing (yech, do people really DO
that?) is involved with figuring out what a program we wrote DOES.

Yeah, I like the flow of this...

Steve