[comp.ai] What Has Traditional AI Accomplished?

loren@tristan.llnl.gov (Loren Petrich) (10/06/90)

	I know that this question may well start a big flame war, but
I would like an idea of exactly what traditional AI has accomplished.
The impression I get is that the main success of traditional AI is in
designing "expert systems", which one has to laboriously set up by
specifying what may be a large number of decision rules for the
problem one wants to solve. Judging from what I
have read of expert systems, that can be a very difficult and
time-consuming task. And expert system software still does not seem
exactly accessible.

	There is only one exception that I know of, and that is for
computer algebra systems. There, for the most part, the decision rules
are already known quantities, some having been known for centuries.
And most of them are relatively straightforward and unambiguous, thus
relatively easy to implement on a computer. To take my own work as an
example, I have used computer algebra systems many times for problems
that sometimes require lengthy algebraic manipulations.
We might count computer algebra as a success for traditional AI.

	Is it fair to say that computer algebra is the only
application of traditional AI that has had any widespread use?

	I am not trying to pick on the AI field, but I really think
that it has not come very far over the decades that it has been in
existence.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/07/90)

In article <69367@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>
>	I know that this question may well start a big flame war, but
>I would like an idea of exactly what traditional AI has accomplished.
>
  I won't try to speak for all of AI, but I would like to throw in
my two cents' worth here.  Firstly, an *exact* rundown of what AI has
been up to since its inception would take volumes.  Secondly, I'm not
quite sure what you're referring to with "traditional" AI.  AI denotes
to me the study of problems spanning computer vision, robotics, speech
processing, text understanding, cognitive modelling, pattern recognition
(in some of its incarnations), knowledge engineering (which is where
expert systems fit in), even music generation and analysis.  You may
disagree with the way I've delimited the sub-fields, but the area is
nevertheless more diverse than you seem to suggest.

>	Is it fair to say that computer algebra is the only
>application of traditional AI that has had any widespread use?
>
  Hardly.  AI systems do everything from security (recognition of
fingerprints/voiceprints/retinal patterns) and police work (arriving
at composite pictures of people from descriptions, or aging pictures)
to forecasting which chemicals might have certain properties and
therefore be more worth testing during scarce lab time.  Game-playing
programs are in very widespread use, and I think some of those would
fit the description of "traditional AI".  If by 'accomplishment', you
mean only stuff you personally find worthwhile, then evidently AI
has accomplished little.

>	I am not trying to pick on the AI field, but I really think
>that it has not come very far over the decades that it has been in
>existence.
>
  Well, thinking of the start of computational linguistics, which is
the area I'm most familiar with, AI started out as statistical analyses
of text (frequency of use of certain constructions/lexical items) to
try to resolve authorship disputes.  Today the structural analysis
of text is reasonably sophisticated, allowing the formal study of
such slippery things as style, and even good machine translation --
something that proved to be a terrible flop when first attempted in the 50's.

  I think that AI has come quite far over its current life.  The reason
that it continues to disappoint some is that as more advances are
made, the number of new problems to be solved only increases, and the
true difficulty of answering old questions is only then truly
appreciated.  AI is by definition (or lack of definition :) an open
area, and so it constantly expands as it's moved into.  If you're
expecting some sudden, radical overhaul in your life, you will likely
be disappointed. :(

--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

loren@tristan.llnl.gov (Loren Petrich) (10/09/90)

In article <1990Oct7.003647.1666@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>In article <69367@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>>
>>	I know that this question may well start a big flame war, but
>>I would like an idea of exactly what traditional AI has accomplished.
>>
>  I won't try to speak for all of AI, but I would like to throw in
>my two cents' worth here.  Firstly, an *exact* rundown of what AI has
>been up to since its inception would take volumes.  Secondly, I'm not
>quite sure what you're referring to with "traditional" AI...

	I guess I should have been more explicit about it.

	I meant AI with inference rules stated explicitly, rather than
AI with inference rules that are "learned" by the system. Most of
"traditional AI" has not been able to "learn", and perhaps that is my
whole problem with the field -- how to derive whatever inference rules
are necessary.

>>	Is it fair to say that computer algebra is the only
>>application of traditional AI that has had any widespread use?
>>
>	[a whole list of applications, include game programs]

	I concede that certain computer game programs constitute
widespread applications of AI. Some of the other examples do not seem
terribly well-known. I was talking from my experience, which is that
there has been an abundance of research on AI, but a shortage of
(1) relatively accessible formulations and (2) practical applications.
I guess I was thinking of some AI system with the same (1) performance, (2)
accessibility, and (3) workability outside of an AI-lab environment as
the computer algebra systems that have been developed -- Macsyma,
Mathematica, Maple, etc.

	As a happy user of computer algebra systems, I have not needed
to learn the underlying principles of their operation, however
desirable it may be to do so. I have
also been able to do just about everything with them by myself, though
with the help of some manuals. I have not needed the continual
assistance of the writers of these packages in order to use them. I
have also been able to do a remarkable variety of things with the
computer-algebra packages I have used.

	My basic question was, has there been any other AI application
with that kind of success?

>>	I am not trying to pick on the AI field, but I really think
>>that it has not come very far over the decades that it has been in
>>existence.
>>
>  Well, thinking of the start of computational linguistics, which is
>the area I'm most familiar with, AI started out as statistical analyses
>of text (frequency of use of certain constructions/lexical items) to
>try to resolve authorship disputes.  Today the structural analysis
>of text is reasonably sophisticated, allowing the formal study of
>such slippery things as style, and even good machine translation --
>something that proved to be a terrible flop when first attempted in the 50's.

	That's certainly fine, but has there been much outside of the
AI lab? Are there many language-translator programs on the market?

>  I think that AI has come quite far over its current life.  The reason
>that it continues to disappoint some is that as more advances are
>made, the number of new problems to be solved only increases, and the
>true difficulty of answering old questions is only then truly
>appreciated.  AI is by definition (or lack of definition :) an open
>area, and so it constantly expands as it's moved into.  If you're
>expecting some sudden, radical overhaul in your life, you will likely
>be disappointed. :(

	I was not insisting on any such thing. As I commented earlier,
I was wondering if there was any other success comparable to computer
algebra.


$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/10/90)

In article <69460@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>In article <1990Oct7.003647.1666@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>>In article <69367@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>>>
>>>	I know that this question may well start a big flame war, but
>>>I would like an idea of exactly what traditional AI has accomplished.
>>>
>>  I won't try to speak for all of AI, but I would like to throw in
>>my two cents' worth here.  Firstly, an *exact* rundown of what AI has
>>been up to since its inception would take volumes.  Secondly, I'm not
>>quite sure what you're referring to with "traditional" AI...
>
>	I guess I should have been more explicit about it.
>
>	I meant AI with inference rules stated explicitly, rather than
>AI with inference rules that are "learned" by the system. Most of
>"traditional AI" has not been able to "learn", and perhaps that is my
>whole problem with the field -- how to derive whatever inference rules
>are necessary.
>
[stuff deleted...]

>	My basic question was, has there been any other AI application
>with that kind of success?
>
[stuff deleted...]

>	That's certainly fine, but has there been much outside of the
>AI lab? Are there many language-translator programs on the market?
>

  Well, the one that I know of is called METEO, which was developed at
McGill and translates english weather forecasts into french (or was it
the other way around?)  This may not sound like much, but since the
National Weather Service Bureau must perform the translation (and over
a large volume of documents), the fact that about 80% of what METEO
does needs no correction saves them a lot of time and money.

  I recall summer work, as an undergrad, in the automated truck assembly
plant that GM Canada runs in Oshawa.  The programs that run some of the
robots and welding arms would have likely been considered AI ten years
ago or so.  They also have expert systems for problem diagnosis with
both parts and systems, and are actively working on more.  But that
sort of thing is just not sexy anymore. :>

>	I was not insisting on any such thing. As I commented earlier,
>I was wondering if there was any other success comparable to computer
>algebra.
>
  On the subject of commercial success in AI, I have heard much in the 
past about the Japanese efforts to build a machine translator from
japanese to english and vice versa.  Does anyone know what has become
of this?

>
>$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov
>

PS.  I have never thought of computer algebra as AI before.  Why do
you place it there?  (I'm just asking out of curiosity, not to
criticize.)  All computer programs manipulate data according to a set
of rules, but I always thought of AI as programming which attempts to
provide functionality comparable to some human cognitive ability (or
abilities).  Is algebra a cognitive ability, or is my definition too stringent?


--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/10/90)

In article <69367@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren
Petrich) writes: (I may have the attribution wrong)

>	I know that this question may well start a big flame war, but
>I would like an idea of exactly what traditional AI has accomplished.

Someone at Arthur Anderson Inc told me that half of their
multi-billion dollar income was coming from the expert systems
supplied by them, about a year or so ago.  The ES part of the company
had become as large as the rest of it, and this led to the company's
splitting into two parts.

It is hard to say what AI is, after a few years, because the heuristic
methods get refined.  For example, there are many computer vision
systems working in different places, but no one regards them as AI any
more.  OCR and speech recognizers are today on the edge of being major
industries; they were called AI when the pioneering work was done, but
we merely call them Pattern Recognition today.

Computer algebra was AI when Slagle did his 1961 thesis on
integration.  The methods became more refined and reliable by the
time of Moses' thesis in 1966 (I think) and the heuristic search was
almost completely eliminated by the time of MACSYMA, because the
mathematical foundations had been highly developed (by Risch and
Caviness, among others).

Look at any bookshelf of texts on ES's to see industrial applications,
or any shelf of books on computational linguistics, to see what has
become of earlier AI language systems.

The big future, in my view, will come when common sense data bases
(none of which yet exist, except perhaps for CYC as a prototype) help
the field move from "expert" applications to "commonsense"
applications.  Only then will general-purpose language translation be
feasible.

klb@unislc.uucp (Keith L. Breinholt) (10/10/90)

I know I'm responding to a response but I'd like to add my 2 cents also.

From article <1990Oct7.003647.1666@watdragon.waterloo.edu>, by cpshelley@violet.uwaterloo.ca (cameron shelley):
> In article <69367@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>>
>>	I know that this question may well start a big flame war, but
>>I would like an idea of exactly what traditional AI has accomplished.
>>
>   I won't try to speak for all of AI, but I would like to throw in
> my two cents' worth here.  Firstly, an *exact* rundown of what AI has
> been up to since its inception would take volumes.  Secondly, I'm not
> quite sure what you're referring to with "traditional" AI.  AI denotes
> to me the study of problems spanning computer vision, robotics, speech
> processing, text understanding, cognitive modelling, pattern recognition
> (in some of its incarnations), knowledge engineering (which is where
> expert systems fit in), even music generation and analysis.  You may
> disagree with the way I've delimited the sub-fields, but the area is
> nevertheless more diverse than you seem to suggest.
> 
>>	Is it fair to say that computer algebra is the only
>>application of traditional AI that has had any widespread use?
>>
>   Hardly.  AI systems do everything from security (recognition of
> fingerprints/voiceprints/retinal patterns) and police work (arriving
> at composite pictures of people from descriptions, or aging pictures)
> to forecasting which chemicals might have certain properties and
> therefore be more worth testing during scarce lab time.

We could add to the list a host of medical instruments such as
CAT scanners and ultrasound machines (both of which look at internal
organs without surgery).  Then there are tools such as character readers,
text and graphic scanners, and the nifty little barcode readers at the
supermarket.

American Express uses an expert system to check your spending patterns
and okay or nix your purchases.  (The others may also; AmEx is the only
one that I know of for sure.)  Expert Systems are being used to do things
such as find mineral and oil deposits, create pictures in electron
microscopes, find vaccines for diseases, and diagnose obscure diseases.

For more everyday things, take a look at the car you drive: it was
assembled in part by robots, as were most of the chips in your
computer, car, microwave, VCR, stereo, furnace, television, telephone,
airplane, wristwatch, etc.  The software that controls almost
everything in your life was more than likely compiled and written
using tools that use "AI" techniques.  (Most compilers do data-flow
analysis and syntax checking, both of which had roots in AI labs.  Any of the
CASE tools used nowadays couldn't exist without some of the fundamental
research done 30 years ago in AI labs.)

>>	I am not trying to pick on the AI field, but I really think
>>that it has not come very far over the decades that it has been in
>>existence.

For a field that hasn't come very far I'd like you to find a part of
your life that AI hasn't touched.

I think your problem is in recognizing the fruits of research done 10
to 20 years ago in "everyday" applications.

Keith L. Breinholt
Unisys, Unix Systems Group

mohammad@hpclmp.HP.COM (Mohammad Pourheidari) (10/10/90)

>/ hpclmp:comp.ai / minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) /
>3:29 pm  Oct  9, 1990 / 

>The big future, in my view, will come when common sense data bases
>(none of which yet exist, except perhaps for CYC as a prototype) help
>the field move from "expert" applications to "commonsense"

Could you provide more information on CYC?

thanks, M.

maniac@sonny-boy.cs.unlv.edu (Eric J. Schwertfeger) (10/11/90)

In article <3649@media-lab.MEDIA.MIT.EDU>, minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) writes:
) In article <69367@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren
) Petrich) writes: (I may have the attribution wrong)
) 
) >	I know that this question may well start a big flame war, but
) >I would like an idea of exactly what traditional AI has accomplished.
) 
) Someone at Arthur Anderson Inc told me that half of their
) multi-billion dollar income was coming from the expert systems
) supplied by them, about a year or so ago.  The ES part of the company
) had become as large as the rest of it, and this led to the company's
) splitting into two parts.
) 
) It is hard to say what AI is, after a few years, because the heuristic
) methods get refined.  

	This is exactly why it seems that AI has made so little progress
to some people.  I'm in a 400-level Intro to AI class now, and our 
definition of AI is basically "whatever we haven't figured out how
to do yet."  As soon as AI research refines the methods, the problem
falls out of the AI category.

	Playing chess was originally considered an AI field.  Well, that
research resulted in machines that now play low-Grand-Master level chess.
The problem is no longer considered AI as much, since we've had success.

-- 
Eric J. Schwertfeger, maniac@jimi.cs.unlv.edu

loren@tristan.llnl.gov (Loren Petrich) (10/11/90)

In article <1990Oct9.184502.106@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>In article <69460@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>>	That's certainly fine, but has there been much outside of the
>>AI lab? Are there many language-translator programs on the market?
>>
>
>  Well, the one that I know of is called METEO, which was developed at
>McGill and translates english weather forecasts into french (or was it
>the other way around?)  This may not sound like much, but since the
>National Weather Service Bureau must perform the translation (and over
>a large volume of documents), the fact that about 80% of what METEO
>does needs no correction saves them a lot of time and money.

	Pretty remarkable. I wonder how much of the work that went
into this natural-language translation program can be generalized to
other translation tasks?

	And would it not be desirable to construct a language
translator that can learn on its own, rather than one that has to be
spoon-fed all the inference rules for its operation?

	I guess I have not heard much of this work because a lot of it
seems to have a VERY low profile. But if there are results worth
pointing to, then please do point to them.

>  I recall summer work, as an undergrad, in the automated truck assembly
>plant that GM Canada runs in Oshawa.  The programs that run some of the
>robots and welding arms would have likely been considered AI ten years
>ago or so.  They also have expert systems for problem diagnosis with
>both parts and systems, and are actively working on more.  But that
>sort of thing is just not sexy anymore. :>

	Again, no mean feat. But would it be a good idea to have a
system that can learn from example?

	And about robotics, I note that most present-day models do not
have much by way of visual feedback for controlling the motions of
their arms. How much progress has there been there?

	And one may want to have some high-level way of specifying the
tasks that they are to perform. One should not need to specify each
little detail of their operation, anymore than we consciously specify
exactly which muscles to contract, and by how much.

>>	I was not insisting on any such thing. As I commented earlier,
>>I was wondering if there was any other success comparable to computer
>>algebra.

>PS.  I have never thought of computer algebra as AI before.  Why do
>you place it there?  (I'm just asking out of curiosity, not to
>criticize.)  All computer programs manipulate data according to a set
>of rules, but I always thought of AI as programming which attempts to
>provide functionality comparable to some human cognitive ability (or
>abilities).  Is algebra a cognitive ability, or is my definition too stringent?

	I think it does qualify as AI -- it is probably no different
from any other type of expert system in that regard. Or are expert
systems not really AI?


$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

loren@tristan.llnl.gov (Loren Petrich) (10/11/90)

In article <3649@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>
>
>In article <69367@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren
>Petrich) writes: (I may have the attribution wrong)
>
>>	I know that this question may well start a big flame war, but
>>I would like an idea of exactly what traditional AI has accomplished.
>
>Someone at Arthur Anderson Inc told me that half of their
>multi-billion dollar income was coming from the expert systems
>supplied by them, about a year or so ago.  The ES part of the company
>had become as large as the rest of it, and this led to the company's
>splitting into two parts.

	Pretty remarkable. I have not heard much of it. What are the
capabilities of Arthur Anderson's systems? Do they have anything to
help out in working out inference rules?

	Any successful applications?

>It is hard to say what AI is, after a few years, because the heuristic
>methods get refined.  For example, there are many computer vision
>systems working in different places, but no one regards them as AI any
>more.  OCR and speech recognizers are today on the edge of being major
>industries; ...

	I haven't heard much of that either.

	How much can they accomplish? How well developed are they?

>Look at any bookshelf of texts on ES's to see industrial applications,
>or any shelf of books on computational linguistics, to see what has
>become of earlier AI language systems.

	I'd love to hear about some examples. Any accessible
introductions?

>The big future, in my view, will come when common sense data bases
>(none of which yet exist, except perhaps for CYC as a prototype) help
>the field move from "expert" applications to "commonsense"
>applications.  Only then will general-purpose language translation be
>feasible.

	I see. I saw an article about CYC some time ago in _Discover_;
it is apparently able to learn. I wonder how "intelligent" it
seems to those who have worked with it.

	I also get the impression that it is a rather monstrous
system. I guess my ideal would be to have a relatively simple "kernel"
system, which would proceed to build up a large database of
information on whatever it was working on, by employing some learning
algorithm.


$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

loren@tristan.llnl.gov (Loren Petrich) (10/11/90)

In article <1990Oct10.140751.11750@unislc.uucp> klb@unislc.uucp (Keith L. Breinholt) writes:
>>>	I am not trying to pick on the AI field, but I really think
>>>that it has not come very far over the decades that it has been in
>>>existence.
>
>For a field that hasn't come very far I'd like you to find a part of
>your life that AI hasn't touched.
>
>I think your problem is in recognizing the fruits of research done 10
>to 20 years ago in "everyday" applications.

	That's part of it.

	Another part of it is that about the only software I have
encountered that uses AI techniques (and I have done a lot of looking
around at available software) are computer-algebra systems, certain
computer games, and compilers.

	There don't seem to be too many readily available expert
system shells, for instance.

	And Neural Nets, what I am working on now, are a field that is
only recently reviving.


$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

cowan@marob.masa.com (John Cowan) (10/11/90)

In article <2067@jimi.cs.unlv.edu>,
	maniac@sonny-boy.cs.unlv.edu (Eric J. Schwertfeger) writes:

>Our definition of AI is basically "whatever we haven't figured out how
>to do yet."  As soon as AI research refines the methods, the problem
>falls out of the AI category.

[cites chess as an example]

One of the earliest AI efforts still continuing, "automatic programming",
shows this phenomenon happening over and over.  Automatic programming is
the effort to make human programmers, er, impotent and obsolete.

The first breakthrough in this field produced a program that, indeed, would
(based on general instructions from a human being) generate a program
all by itself.  This software miracle was what we now call "an assembler".

Unfortunately, I don't have a reference for this story.  Does anybody?
-- 
cowan@marob.masa.com			(aka ...!hombre!marob!cowan)
			e'osai ko sarji la lojban

stuck@agnes (Elizabeth Stuck) (10/11/90)

In article <55690002@hpclmp.HP.COM> mohammad@hpclmp.HP.COM (Mohammad
Pourheidari) writes: 
>
>>The big future, in my view, will come when common sense data bases
>>(none of which yet exist, except perhaps for CYC as a prototype) help
>>the field move from "expert" applications to "commonsense"
>
>Could you provide more information on CYC?
>
>thanks, M.

There is an informative article by Lenat et al. on Cyc in a recent
issue of _Communications of the ACM_ (v. 33, #8, August 1990).  It is
entitled "Cyc: Toward Programs with Common Sense".  Here's the
abstract:

	"Cyc is a bold attempt to assemble a massive knowledge base
	(on the order of 10**8 axioms) spanning human consensus
	knowledge.  This article examines the need for such an
	undertaking and reviews the authors' efforts over the past
	five years to begin its construction.  The methodology and
	history of the project are briefly discussed, followed by a
	more developed treatment of the current state of the
	representation language used (epistemological level),
	techniques for efficient inferencing and default reasoning
	(heuristic level), and the content and organization of the
	knowledge base."


Liz Stuck
Computer Science Department, University of Minnesota
stuck@umn-ai.cs.umn.edu

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/11/90)

In article <69604@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>In article <1990Oct9.184502.106@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>>In article <69460@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>	Pretty remarkable. I wonder how much of the work that went
>into this natural-language translation program can be generalized to
>other translation tasks?
>

  Unfortunately, I don't think a lot is being done with it anymore.  Funding
for the sciences in general has been decreasing for a while...

>	And would it not be desirable to construct a language
>translator that can learn on its own, rather than one that has to be
>spoon-fed all the inference rules for its operation?
>

  Very!  Of course.

>	Again, no mean feat. But would it be a good idea to have a
>system that can learn from example?
>

  Yes again!

>	And one may want to have some high-level way of specifying the
>tasks that they are to perform. One should not need to specify each
>little detail of their operation, anymore than we consciously specify
>exactly which muscles to contract, and by how much.
>
  
  It would be nice, but I think you've left the realm of traditional
AI here.  Perhaps a suitable definition of AI for your purposes is 
one I heard not long ago, "AI is the art of getting real computers
to do what the ones in science fiction do."

--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

rose@ils.nwu.edu (Scott Rose) (10/12/90)

In article <3649@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu
(Marvin Minsky) writes:
>
>Someone at Arthur Anderson Inc told me that half of their
>multi-billion dollar income was coming from the expert systems
>supplied by them, about a year or so ago.  The ES part of the company
>had become as large as the rest of it, and this led to the company's
>splitting into two parts.

These facts are not quite right.  Actually, it was the systems
consulting division of Arthur Andersen which accounted for
approximately half of the company's total income.  Expert systems work
probably contributed less than 5% to the bottom line.  It is, however,
one of the fastest-growing areas of the consulting practice.

In article <69607@lll-winken.LLNL.GOV>,  loren@tristan.llnl.gov (Loren
Petrich) writes 
>
>	Pretty remarkable. I have not heard much of it. What are the
>capabilities of Arthur Anderson's systems? Do they have anything to
>help out in working out inference rules?

Most of the work is built on expert systems workbenches such as those
built by Aion and Inference.

***************************
Scott Rose
rose@rose.ils.nwu.edu
***************************

daryl@oravax.UUCP (Steven Daryl McCullough) (10/12/90)

In article <1990Oct9.184502.106@watdragon.waterloo.edu>, cpshelley@violet.uwaterloo.ca 
(cameron shelley) writes:
>   Well, the one [language translation program]
> that I know of is called METEO, which was developed at
> McGill and translates english weather forecasts into french (or was it
> the other way around?)  This may not sound like much, but since the
> National Weather Service Bureau must perform the translation (and over
> a large volume of documents), the fact that about 80% of what METEO
> does needs no correction saves them a lot of time and money.

The way I understand it, though, language translation for things like
weather reports is being dropped in favor of language generation. In
language generation, the input is data about the weather (or whatever)
in the form of machine-readable charts and so forth, and the output is
a natural language document. By replacing the grammar and semantic
database, one can generate English, French, Inuit, or whatever. This
is much more successful than language translation, since translation
requires a high degree of understanding of the two languages.
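
To make the template-filling idea concrete, a toy sketch in C might
look like the fragment below.  (The weather record, the sentence
frames, and the French wording are all invented for illustration; none
of this is taken from the systems mentioned in this thread, which of
course do far more with grammar and aggregation.)

#include <stdio.h>

/* Toy weather record -- the fields are invented for illustration only. */
struct forecast { const char *city; int high_c; int chance_rain; };

/* A per-language "grammar": here just a sentence frame.  Swapping this
   table is what lets the same data drive English or French output.    */
struct language { const char *name; const char *frame; };

static const struct language langs[] = {
    { "English", "In %s, expect a high of %d C with a %d%% chance of rain.\n" },
    { "French",  "A %s, on prevoit un maximum de %d C et %d%% de risque de pluie.\n" },
};

static void generate(const struct language *lang, const struct forecast *f)
{
    printf(lang->frame, f->city, f->high_c, f->chance_rain);
}

int main(void)
{
    struct forecast today = { "Montreal", 12, 60 };
    int i;

    for (i = 0; i < (int)(sizeof langs / sizeof langs[0]); i++)
        generate(&langs[i], &today);
    return 0;
}

Replacing the langs[] table is the whole trick; the weather data itself
never changes, which is why generation sidesteps the understanding
problem that translation runs into.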

Daryl McCullough

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/12/90)

In article <55690002@hpclmp.HP.COM> mohammad@hpclmp.HP.COM (Mohammad Pourheidari) writes:
>Could you provide more information on CYC?


The CYC project at MCC in Austin, Texas is headed by Douglas Lenat. He
just published a book about CYC -- MIT Press.  I don't have the title handy.

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/12/90)

In article <69607@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>In article <3649@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:

>>The big future, in my view, will come when common sense data bases
>>(none of which yet exist, except perhaps for CYC as a prototype) help
>>the field move from "expert" applications to "commonsense"
>>applications.  Only then will general-purpose language translation be
>>feasible.
>
>	I see. I saw an article about CYC some time ago in _Discover_;
>it is apparently able to learn. I wonder how "intelligent" it
>seems to those who have worked with it.
>>	I also get the impression that it is a rather monstrous
>system. I guess my ideal would be to have a relatively simple "kernel"
>system, which would proceed to build up a large database of
>information on whatever it was working on, by employing some learning
>algorithm.

CYC is not intelligent yet.  Its architect, Douglas Lenat, maintains
that his goal is to supply a commonsense database, and *then* work on
making the system be intelligent.  His reason, roughly, is that being
smart requires a lot of common sense knowledge.

Yes, it would be nice if we could do this with a learning program,
instead of having to program it.  Only no one knows how to do this
yet.

The problem with "building a large database of .. whatever it was
working on" is, in my view, that this is why the expert systems have
remained so limited and specialized.  Suppose you were making a system
to help storekeepers.  What's a shirt.  As Lenat points out, you ought
to know where they come from.  Clothing stores.  How do you know that.
I buy socks in the drug store around the corner.  How long do you wear
a shirt.  When it gets a stain, you can still wear it for fixing your
car.  Unless you know more or less "everything that every ordinary
person knows" you can't interact with them in a reasonable way,
understand what they say, or help them when they need help.  So, let's
try to get some such data bases so that other researchers can use
them to make smart machines.

My associate here, Ken Haase, is doing some experiments to try to
learn some such stuff from reading text and then questioning people.
But it will be a while before we can evaluate his experiments.

sshankar@cs.umn.edu (Subash Shankar) (10/12/90)

It seems to me that optimizing compilers use many techniques which
could be considered AI techniques.  Does anybody know whether these
techniques historically rose from the AI or the compiler communities?

jon@buster.ddmi.com (Jon Havel) (10/12/90)

In article <1990Oct10.140751.11750@unislc.uucp> klb@unislc.uucp (Keith L. Breinholt) writes:
|>>
|>>	I know that this question may well start a big flame war, but
|>>I would like an idea of exactly what traditional AI has accomplished.
|>>

Well, so far no one has given a "textbook" definition, so I'll venture
to give one from one of my AI textbooks.

   "Artificial intelligence is the study of mental faculties through
    the use of computational models"

["Artificial Intelligence", E. Charniak and D. McDermott, Addison Wesley,
  1987, pg 6]

This seems to be a very general definition which does not limit the
study of AI to developing problem-solving systems in the digital medium.

This definition also gives a hint that the study of AI can be carried out  
in many different disciplines [psychology, neurobiology, philosophy, etc].

mtanner@gmuvax2.gmu.edu (Michael C. Tanner) (10/13/90)

In article <27147BB0.11EA@marob.masa.com> cowan@marob.masa.com (John Cowan) writes:
  [ about how early automatic programming efforts produced an assembler ]

I don't know about this.  But once I heard Grace Murray Hopper speak and she
took credit for getting computers to understand English.  When asked what she
meant, she said COBOL.

--
Michael C. Tanner                         Assistant Professor
CS Dept                                   AI Center
George Mason Univ.                        Email: tanner@aic.gmu.edu
Fairfax, VA 22030                         Phone: (703) 764-6487

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/13/90)

In article <1712@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:
>The way I understand it, though, language translation for things like
>weather reports is being dropped in favor of language generation. In
>language generation, the input is data about the weather (or whatever)
>in the form of machine-readable charts and so forth, and the output is
>a natural language document. By replacing the grammar and semantic
>database, one can generate English, French, Inuit, or whatever. This
>is much more successful than language translation, since translation
>requires a high degree of understanding of the two languages.
>
>Daryl McCullough

This is not true, at least as far as I know.  All forms of machine language
'use' are not very well funded but I don't think generation has really
begun replacing translation, so much as complementing it.  Translation
has been focusing more on the harder task of  dealing with real texts in 
identifiable genres, while generation could be used as you describe, I
just don't know of any examples.  Perhaps it is different in the 'real
world'.

--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/13/90)

In article <1990Oct12.192833.7783@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
> ...  All forms of machine language
>'use' are not very well funded but I don't think generation has really
>begun replacing translation, so much as complementing it.  Translation
>has been focusing more on the harder task of  dealing with real texts in 
>identifiable genres, while generation could be used as you describe, I
>just don't know of any examples.  Perhaps it is different in the 'real
>world'.

I don't have any hard facts, but have the impression that translation
is well funded in Japan, and in Europe, where many not-perfect systems
are in large scale use for rough translation, usually followed up by
human corrections.  I am told that this is very cost effective.

daryl@oravax.UUCP (Steven Daryl McCullough) (10/13/90)

In article <1990Oct12.192833.7783@watdragon.waterloo.edu>, cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
> In article <1712@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:

> >This [natural language generation]
> >is much more successful than language translation, since translation
> >requires a high degree of understanding of the two languages.
> >
> >Daryl McCullough
> 
> This is not true, at least as far as I know.  All forms of machine language
> 'use' are not very well funded but I don't think generation has really
> begun replacing translation, so much as complementing it.  Translation
> has been focusing more on the harder task of  dealing with real texts in 
> identifiable genres, while generation could be used as you describe, I
> just don't know of any examples.  Perhaps it is different in the 'real
> world'.

I speak from the experience in my company. We have had a number of
successes at machine generation, but translation is beyond us. Some
examples:

(1) the GOSSIP system, which takes an operating system audit log
and generates a narrative (in English) description of the system usage
for the purpose of identifying suspicious behavior.

(2) the JOYCE system, which takes a graphical design of a large
distributed classified system and generates text documenting the
system and describing possible "covert channels" through which
classified information might be leaked.

Other projects I know of that are not connected with our company:

(3) a former consultant has a company, based in Montreal, which has
been generating weather reports from weather data in several
languages.

(4) the Boyer-Moore theorem prover (developed at the University of
Texas at Austin) has a program which automatically generates an
English description of the reasoning used in a proof.

Daryl McCullough

smoliar@vaxa.isi.edu (Stephen Smoliar) (10/13/90)

In article <3670@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu
(Marvin Minsky) writes:
>
>The problem with "building a large database of .. whatever it was
>working on" is, in my view, that this is why the expert systems have
>remained so limited and specialized.  Suppose you were making a system
>to help storekeepers.  What's a shirt.  As Lenat points out, you ought
>to know where they come from.  Clothing stores.  How do you know that.
>I buy socks in the drug store around the corner.  How long do you wear
>a shirt.  When it gets a stain, you can still wear it for fixing your
>car.  Unless you know more or less "everything that every ordinary
>person knows" you can't interact with them in a reasonable way,
>understand what they say, or help them when they need help.

I fear there is a tendency to underestimate the possible impact of these
observations.  We are so crazy about data bases that we tend to think that
they will solve all our problems if we just fill them up properly.  However,
it is not clear (to me, at least) that we really CAN build a data base which
will capture "everything that every ordinary person knows" . . . even about
the limited domain of shirts.  Our knowledge of shirts is very much a matter
of how we experience the world, in which we wear shirts, buy them, take them
to the laundry, and any number of other things, often dictated by the demands
of a specific situation.  (Had it been a warmer day, Walter Raleigh might not
have had a cloak, in which case I might have used his shirt, instead!)  Given
so much variety, I am not sure it makes sense to ask how much we can store away
about shirts in a data base, only to then worry about how we are ever going to
retrieve any of it, and under what circumstances.  An alternative approach
is to ask what we need to know in order to behave properly when shirts are part
of the world around us.  Thus, we learn how to button up our shirts through a
process which evolves from having it done for us to doing it for ourselves.
(This reminds me of Minsky's observation--which I recently heard on CNN--that
a computer can take on the complexity of chess but not the simplicity of tying
shoe laces.)

As Minsky pointed out, we do not know how to build a learning program which
could "feed" CYC.  Perhaps we are worrying about the wrong kind of learning
program.  Perhaps it makes more sense to worry about learning patterns of
behavior, like buttoning a shirt or knowing when it is time to take it to
the cleaners.  Our preoccupation with neat declarative sentences (or entries
in a data base) tends to distract us from such questions of behavior; but
perhaps that is a better front along which to attack issues of learning.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/14/90)

In article <3694@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>
>I don't have any hard facts, but have the impression that translation
>is well funded in Japan, and in Europe, where many not-perfect systems
>are in large scale use for rough translation, usually followed up by
>human corrections.  I am told that this is very cost effective.

Can anyone in Europe confirm or deny this?  I dimly recall hearing of
some increased effort in this area in connection with the "drive to
'92".  Btw, there are working programs up here which have proven
cost-effective, but research into improving generality is scarce...
--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

cam@aipna.ed.ac.uk (Chris Malcolm) (10/15/90)

In article <1990Oct11.143937.29160@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>In article <69604@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:

>>	And one may want to have some high-level way of specifying the
>>tasks that they are to perform. One should not need to specify each
>>little detail of their operation, anymore than we consciously specify
>>exactly which muscles to contract, and by how much.

>  It would be nice, but I think you've left the realm of traditional
>AI here.

Not at all. This has been a central concern of both robotics and
planning researchers from the earliest days. It could also be said that
your description applies to PROLOG, a language in which you do not
specify HOW to arrive at the answer, but WHAT IS TRUE of the answer.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

klb@unislc.uucp (Keith L. Breinholt) (10/15/90)

From article <69609@lll-winken.LLNL.GOV>, by loren@tristan.llnl.gov (Loren Petrich):

>>I think your problem is in recognizing the fruits of research done 10
>>to 20 years ago in "everyday" applications.
> 
> 	That's part of it.
> 
> 	Another part of it is that about the only software I have
> encountered that uses AI techniques (and I have done a lot of looking
> around at available software) are computer-algebra systems, certain
> computer games, and compilers.
>
> 	There don't seem to be too many readily available expert
> system shells, for instance.

Try OPS5, ESIE, and any number of "public domain" shells.  DEC
uses a system built on OPS5 (or was it R1) to configure every system
that they send out.

ExperSYS seems to have had some commercial success, as has TI's
Consultant.  TI has a group that helps industry users of their
products bring Expert Systems to fruition.  When I went through the
AI series at the University of Utah we watched a number of tapes from
TI on (real) field applications.

> 	And Neural Nets, what I am working on now, are a field that is
> only recently reviving.

Someone correct me if I'm wrong, but I thought Neural Nets as an area of
study were only 5 or so years old.  In terms of research, 5 years is
baby technology.  If Neural Nets are consistent with other research, they
won't make it into general public acceptance for another 5 to 10
years.

If you want to know where neural nets will appear in mainstream
applications, look at where they are accepted in research, and at any
successful, although currently obscure, applications.

Someone else can better help you in this area, but some
areas that you may look into are: asynchronous circuits (GaAs?),
pattern (vision) recognition systems, parallel processors (scheduling,
comm. routing, addition, ...), and robotics (sensor feedback).

I hope that helps.

Keith L. Breinholt
Unisys, Unix Systems Group

kbreinho@peruvian.utah.edu or
hellgate.utah.edu!uplherc!unislc!klb

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/16/90)

In article <3271@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>In article <1990Oct11.143937.29160@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>>In article <69604@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>
>>>	And one may want to have some high-level way of specifying the
>>>tasks that they are to perform. One should not need to specify each
>>>little detail of their operation, anymore than we consciously specify
>>>exactly which muscles to contract, and by how much.
>
>>  It would be nice, but I think you've left the realm of traditional
>>AI here.
>
>Not at all. This has been a central concern of both robotics and
>planning researchers from the earliest days. It could also be said that
>your description applies to PROLOG, a language in which you do not
>specify HOW to arrive at the answer, but WHAT IS TRUE of the answer.

What you say is true in the context you quote, but my impression was that
the subject of discussion in this particular case was having something which
could take care of both program and data structures at a 'high' level.
Maybe GPS would be considered a better stab at this than PROLOG.  In 
retrospect, it is possible that I overestimated the 'high'ness meant by
Loren.  Sorry!

--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

loren@tristan.llnl.gov (Loren Petrich) (10/16/90)

In article <2127@anaxagoras.ils.nwu.edu> rose@ils.nwu.edu (Scott Rose) writes:
>In article <69607@lll-winken.LLNL.GOV>,  loren@tristan.llnl.gov (Loren
>Petrich) writes 
>>
>>	Pretty remarkable. I have not heard much of it. What are the
>>capabilities of Arthur Anderson's systems? Do they have anything to
>>help out in working out inference rules?
>
>Most of the work is built on expert systems workbenches such as those
>built by Aion and Inference.

	I wonder how convenient they are.

	I think it would be cute to illustrate their operation with
some simplified example. If anyone can, please do.
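
	To give an idea of the kind of example I mean, here is a
bare-bones forward-chaining loop in C -- a working memory of facts and
a set of if-then rules that fire until nothing new can be concluded.
(The facts and rules are made up, and this is certainly not the actual
syntax of the Aion or Inference workbenches; it is only meant to show
the flavor.)

#include <stdio.h>
#include <string.h>

#define MAXFACTS 32

/* Facts are plain strings; a rule adds its conclusion to working
   memory once both of its conditions are already known.  The example
   is a made-up car-diagnosis fragment.                               */
struct rule { const char *if1, *if2, *then; };

static const struct rule rules[] = {
    { "engine cranks", "engine won't start", "suspect fuel or spark" },
    { "suspect fuel or spark", "fuel gauge reads empty",
      "diagnosis: out of fuel" },
};

static const char *facts[MAXFACTS] = {
    "engine cranks", "engine won't start", "fuel gauge reads empty"
};
static int nfacts = 3;

static int known(const char *f)
{
    int i;
    for (i = 0; i < nfacts; i++)
        if (strcmp(facts[i], f) == 0) return 1;
    return 0;
}

int main(void)
{
    int fired = 1, i;
    while (fired) {                 /* keep firing rules until quiescent */
        fired = 0;
        for (i = 0; i < (int)(sizeof rules / sizeof rules[0]); i++) {
            if (known(rules[i].if1) && known(rules[i].if2) &&
                !known(rules[i].then) && nfacts < MAXFACTS) {
                facts[nfacts++] = rules[i].then;
                printf("rule %d fires: %s\n", i, rules[i].then);
                fired = 1;
            }
        }
    }
    return 0;
}

	A real shell adds pattern variables, conflict resolution,
certainty factors, and tools for editing the rule base, but some such
match-and-fire loop is the heart of it.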
	

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

jwi@cbnewsj.att.com (Jim Winer @ AT&T, Middletown, NJ) (10/16/90)

Someone (not attributed when received) writes:

| ||I think your problem is in recognizing the fruits of research done 10
| ||to 20 years ago in "everyday" applications.

| Loren Petrich writes:
| 
| | 	That's part of it.

| | 	And Neural Nets, what I am working on now, are a field that is
| | only recently reviving.

Keith L. Breinholt writes:

| Someone correct me if I'm wrong, but I thought Neural Nets as an area of
| study were only 5 or so years old.  In terms of research, 5 years is
| baby technology.  If Neural Nets are consistent with other research, they
| won't make it into general public acceptance for another 5 to 10
| years.

I worked on the Mark I Perceptron (Rosenblatt model) in 1959 
at Cornell Aeronautical Laboratories, Inc. (defunct) under contract
to the Office of Naval Research (ONR). That makes the field at least
30 years old. Neural Nets have been inconvenient to work with until 
recently when specialized hardware has become available.

Jim Winer -- jwi@mtfme.att.com -- Opinions not represent employer.
------------------------------------------------------------------
"No, no: the purpose of language is to cast spells on other people ..."
								Lisa S Chabot
								

jlb3b@watt2.acc.Virginia.EDU (James Lewis Bander) (10/18/90)

In article <1990Oct15.143325.26044@unislc.uucp> klb@unislc.uucp (Keith L. Breinholt) writes:
>From article <69609@lll-winken.LLNL.GOV>, by loren@tristan.llnl.gov (Loren Petrich):
>
>> 	And Neural Nets, what I am working on now, are a field that is
>> only recently reviving.
>
>Someone correct me if I'm wrong, but I thought Neural Nets as an area of
>study were only 5 or so years old.  In terms of research, 5 years is
>baby technology.  If Neural Nets are consistent with other research, they
>won't make it into general public acceptance for another 5 to 10
>years.
>

Okay.  I think you're wrong.  See, for example, McCulloch and Pitts,
"A logical calculus of the ideas immanent in nervous activity," in
_Bulletin of Mathematical Biophysics_, volume 5, 1943.

Jim Bander
bander@virginia.edu

loren@tristan.llnl.gov (Loren Petrich) (10/19/90)

In article <1990Oct16.135631.6444@cbnewsj.att.com> jwi@cbnewsj.att.com (Jim Winer @ AT&T, Middletown, NJ) writes:
>
>I worked on the Mark I Perceptron (Rosenblatt model) in 1959 
>at Cornell Aeronautical Laboratories, Inc. (defunct) under contract
>to the Office of Naval Research (ONR). That makes the field at least
>30 years old. Neural Nets have been inconvenient to work with until 
>recently when specialized hardware has become available.

	Specialized hardware?????

	Even that is still only in the experimental stage.

	Most Neural Nets now exist only in software form for the
traditional brand of computer. And it is on such software Neural Nets
that designs of hardware Neural Nets will ultimately depend -- it is
much easier to rewrite a program than to design a new chip. And even a
Neural Net chip would need to be controlled by such a computer.

	The simplicity of the basic algorithms keeps making me wonder
why NN's did not take off earlier -- the basic code for one takes up
only a couple of pages of Fortran or C. Try writing one yourself. I guess
that (in)famous book by Minsky and Papert, _Perceptrons_, with its
seemingly airtight theoretical arguments, is what squelched the
field for so long.
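
	For what it's worth, here is roughly what the simplest case
boils down to -- a single-layer perceptron trained with the classic
error-correction rule on a toy linearly separable problem.  (The data,
the epoch count, and the integer learning rate of 1 are arbitrary
choices, just to show the shape of the thing.)

#include <stdio.h>

#define NIN    2      /* inputs per pattern            */
#define NPAT   4      /* training patterns             */
#define EPOCHS 25     /* more than enough for this set */

/* Learn logical AND -- linearly separable, so a single layer suffices. */
static const int x[NPAT][NIN] = { {0,0}, {0,1}, {1,0}, {1,1} };
static const int t[NPAT]      = {  0,     0,     0,     1    };

static int step(int net) { return net > 0; }

int main(void)
{
    int w[NIN] = { 0, 0 }, bias = 0;
    int epoch, p, i, net, err;

    for (epoch = 0; epoch < EPOCHS; epoch++) {
        for (p = 0; p < NPAT; p++) {
            net = bias;
            for (i = 0; i < NIN; i++)
                net += w[i] * x[p][i];
            err = t[p] - step(net);      /* the error-correction rule */
            for (i = 0; i < NIN; i++)
                w[i] += err * x[p][i];
            bias += err;
        }
    }

    for (p = 0; p < NPAT; p++) {         /* show what was learned */
        net = bias;
        for (i = 0; i < NIN; i++)
            net += w[i] * x[p][i];
        printf("%d AND %d -> %d\n", x[p][0], x[p][1], step(net));
    }
    return 0;
}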

	I wonder what the fellow who had worked on the Rosenblatt Mark
I Perceptron has to say about this question. What does he have to say
about the work of Minsky and Papert?


$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (10/19/90)

In article <1990Oct16.135631.6444@cbnewsj.att.com> jwi@cbnewsj.att.com (Jim Winer @ AT&T, Middletown, NJ) writes:
>Keith L. Breinholt writes:
>
>| Someone correct me if I'm wrong, but I thought Neural Nets as an area of
>| study were only 5 or so years old.  In terms of research, 5 years is
>| baby technology.  If Neural Nets are consistent with other research, they
>| won't make it into general public acceptance for another 5 to 10
>| years.
>
>I worked on the Mark I Perceptron (Rosenblatt model) in 1959 
>at Cornell Aeronautical Laboratories, Inc. (defunct) under contract
>to the Office of Naval Research (ONR). That makes the field at least
>30 years old. Neural Nets have been inconvenient to work with until 
>recently when specialized hardware has become available.

Actually, the death of neural nets in the late sixties and the rebirth of
them a few years ago is a complex story.  Adalines, Perceptrons, and
similar two-layer neural systems were developed, and actually proved
useful in limited ways for signal processing.  The big limitation was
that with two feedforward layers of step-function or sigmoidal activation
functions, the only input-to-output mappings that could be developed were
those whose decision regions are separated by a single line (hyperplane) in
the input space (i.e. functions like exclusive-OR could not be represented
by the structure).  It was fairly obvious from very early neural models that
"hidden layers" were required between the input and output neural layers.
  Now, the perceptron learning rule was developed by agreeing on an error
function to be minimized (usually the sum of squares of differences between
actual outputs and desired outputs).  Training was done by moving along
the negative gradient of this error function, thus (usually) minimizing it.
However, while it is fairly obvious how to differentiate the error function
for a two-layer net, no one could work out how to differentiate the
error function for multiple layers.  Marvin Minsky made some comments on
the difficulty of this in _Perceptrons_, and a lot of people lost interest
in these models.
   Eventually someone worked out how to find the error function gradient
for multiple layer networks.  It really isn't that hard to do, and I
don't understand what was so difficult about it.  I guess the difficult
concept was passing error back from the output layer to the hidden layer,
and prudent use of the chain rule.  Really, I wonder why it took so long
to work out.  Actually, I have a feeling some people did work it out in
the seventies, but after _Perceptrons_ perhaps people were just turned off
by NNs.  
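
  For anyone who wants to see the chain rule at work, here is a rough
sketch of one-hidden-layer backprop in C trained on exclusive-OR.  The
hidden-layer size, learning rate, random seed, and epoch count are
arbitrary choices; with these settings the outputs usually end up near
0 and 1:

/* One-hidden-layer backprop trained on XOR -- a rough sketch.
   (Hidden size, learning rate, seed, and epoch count are arbitrary;
   the key lines are the two "delta" computations from the chain rule.) */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NH 3                                 /* hidden units */

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

int main(void)
{
    double wh[NH][2], bh[NH], wo[NH], bo = 0.0, eta = 0.5;
    double x[4][2] = {{0,0},{0,1},{1,0},{1,1}}, t[4] = {0,1,1,0};
    int i, j, e;

    srand(1);
    for (j = 0; j < NH; j++) {               /* small random initial weights */
        wh[j][0] = (double)rand()/RAND_MAX - 0.5;
        wh[j][1] = (double)rand()/RAND_MAX - 0.5;
        bh[j]    = (double)rand()/RAND_MAX - 0.5;
        wo[j]    = (double)rand()/RAND_MAX - 0.5;
    }

    for (e = 0; e < 20000; e++)
        for (i = 0; i < 4; i++) {
            double h[NH], net = bo, y, dy;
            for (j = 0; j < NH; j++) {       /* forward pass */
                h[j] = sigmoid(wh[j][0]*x[i][0] + wh[j][1]*x[i][1] + bh[j]);
                net += wo[j]*h[j];
            }
            y  = sigmoid(net);
            dy = (t[i] - y) * y * (1.0 - y); /* output delta (squared error) */
            for (j = 0; j < NH; j++) {
                double dh = dy * wo[j] * h[j] * (1.0 - h[j]); /* chain rule */
                wo[j]    += eta * dy * h[j];
                wh[j][0] += eta * dh * x[i][0];
                wh[j][1] += eta * dh * x[i][1];
                bh[j]    += eta * dh;
            }
            bo += eta * dy;
        }

    for (i = 0; i < 4; i++) {                /* show the learned mapping */
        double net = bo;
        for (j = 0; j < NH; j++)
            net += wo[j] * sigmoid(wh[j][0]*x[i][0] + wh[j][1]*x[i][1] + bh[j]);
        printf("%g XOR %g -> %.3f\n", x[i][0], x[i][1], sigmoid(net));
    }
    return 0;
}
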
  Finally, with the publication of _Parallel_Distributed_Processing_,
everyone saw how easy it was to program a multi-layer perceptron,
and other NN structures such as Boltzmann Machines.  At first, however,
mathematical failure #2 of NN researchers happened:  fixed-step-size
gradient descent was used.  Anyone from the mathematical sciences can tell
you that this is a silly way to minimize a function, and learning
speedups of several orders of magnitude can easily be achieved with
conjugate-gradient and other more advanced minimization methods.
Thus people were led to believe that even for very small problems,
NNs were slow, when in fact they really are not.
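
  As a toy illustration of the step-size point (this is plain steepest
descent with an exact line search, not conjugate gradient, and the
ill-conditioned quadratic is a made-up example):

/* Fixed-step gradient descent vs. an exact line search on the
   made-up ill-conditioned quadratic f(x,y) = x*x + 100*y*y. */
#include <stdio.h>

int main(void)
{
    double x, y, gx, gy, step;
    int i;

    /* fixed step: must stay below 2/200 = 0.01 or the y-term diverges */
    x = 1.0; y = 1.0;
    for (i = 0; i < 100; i++) {
        gx = 2.0*x; gy = 200.0*y;
        x -= 0.009*gx; y -= 0.009*gy;
    }
    printf("fixed step,  100 iterations: f = %g\n", x*x + 100.0*y*y);

    /* steepest descent with the exact minimizing step along -gradient */
    x = 1.0; y = 1.0;
    for (i = 0; i < 20; i++) {
        gx = 2.0*x; gy = 200.0*y;
        step = (gx*gx + gy*gy) / (2.0*gx*gx + 200.0*gy*gy);
        x -= step*gx; y -= step*gy;
    }
    printf("line search,  20 iterations: f = %g\n", x*x + 100.0*y*y);
    return 0;
}
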
  Now even recurrent neural networks can be trained, allowing NNs to have
temporal behavior.  
  But NN researchers are beginning to realize that training a big
homogeneous network is not the answer to good learning systems.
Modularization is required.  Cascade-Correlation is an NN algorithm
which develops feature representations chosen to best help reduce
the network error, and then uses those features to minimize that
error.  It is able to solve many problems which were difficult
for homogeneous NNs to solve.
  I see a future where inductive learning by small homogeneous NNs
is used in combination with more traditional AI type goal building.
Cascade-Correlation is a step in that direction.  Divide-and-conquer
of traditional AI is combined with the easy inductive learning of
traditional NNs.  Of course, the trick is to couch this in a
connectionist framework to continue to allow for fast parallel 
computation.

-Thomas Edwards

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/19/90)

In article <69929@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:

>	The simplicity of the basic algorithms keep making me wonder
>why NN's did not take off earlier -- the basic code for one takes up
>only a couple pages of Fortran or C. Try writing one yourself. I guess
>that (in)famous book by Minsky and Papert, _Perceptrons_, with its
>seemingly airtight theoretical arguments, is what had squelched the
>field for so long.

DAMMIT.  Try reading the book. What happened was that the field had
already flattened out, because, although Perceptrons could learn to
recognize certain patterns, they seemed unable to learn some other
kinds of patterns.  The book explicitly analyzes "three layer nets" --
input layer / coefficients / hidden layer / coefficients / and single
neuron output.  But, in fact, most theorems apply to unrestricted multilayer,
loop-free nets.  This does not seem to be well-known.  I assumed it
was obvious.

Since no one has found any errors in those "seemingly airtight
theoretical arguments", you should try to understand what point you're
missing!  It seems strange that I should have to explain this in
comp.ai, at this late date.  "Perceptrons" explained that it will be
hard for such nets to perform, for example, certain kinds of
group-invariant recognition, without duplicating hardware for every
element of the group.

     EXAMPLE: in a simple 100 x 100 square retina, recognize all the
     images that could be reasonably described as depicting "A SQUARE
     INSIDE A CIRCLE".

Loren and others are absolutely right, in that the 80's showed that ML
(multilayer) nets could be made to learn many useful patterns.
"Perceptrons" was concerned with patterns that MLs couldn't learn, not
ones they could!!!!!!!!!!

So no collection of exciting stories of MLs learning things counters
the problems with what they can't learn -- like those distance
invariant relationships between parts of images.

In many cases, "successful" applications of MLs depend on
pre-processing a picture image, by first normalizing it in size, and
then centering it, before presenting it to the ML.  Fine - but don't
tell people that this refutes the Minsky-Papert theorems.  Instead,
now try to do that "circle-inside-square" problem!  And then realize
that many real-world problems require multiple normalizations, which
cannot be pre-computed until you have picked out the sub-patterns.

In that connection, there is wisdom in Thomas G Edwards' remarks in
<6664@jhunix.HCF.JHU.EDU>:

  ... Cascade-Correlation is a NN algorithm which is able to solve
  many problems which were difficult for homogenous NNs to solve. ...
  I see a future where inductive learning by small homogeneous NNs
  is used in combination with more traditional AI type goal building.
  Cascade-Correlation is a step in that direction.  Divide-and-conquer
  of traditional AI is combined with the easy inductive learning of
  traditional NNs.  Of course, the trick is to couch this in a
  connectionist framework to continue to allow for fast parallel
  computation.

Divide-and-conquer is surely needed for circle-inside-square.  Note
that we still don't know how the brain does it.

Get with it, guys!  Of course there are many exciting things that can
be done with ML networks.  A good deal of the brain is made of them.
And there is a lot that requires non-ML networks, and a lot of the
brain is non-ML.  Instead of bashing "Perceptrons", you should use it
as a model, and try to find more general statements about what ML and
other networks can do, and what their limitations are.

What we don't need are intemperate remarks like those in
<POLLACK.90Oct18014110@dendrite.cis.ohio-state.edu>, whose author seems
to deliberately misinterpret everything I have said in this group and
other places.  I don't know why he's so angry at me.

For example, in  one message to this group I said:

   "... Where is the "traditional, symbolic, AI in the brain"?  The
   answer seems to have escaped almost everyone on both sides of this
   great and spurious controversy!  The 'traditional AI' lies in the
   genetic specifications of those functional interconnections: the bus
   layout of the rel A large, perhaps messy software is there before your
   eyes, hiding in the gross anatomy.  Some 3000 "rules" about which
   sub-NN's should do what, and under which conditions, as dictated by
   the results of computations done in other NNs...."

Pollack replied with this weird objection:

   "I have to admit this is definitely a novel version of the
   homunculus fallacy: If we can't find him in the brain, he must be
   in the DNA! Of all the data and theories on cellular division and
   specialization and on the wiring of neural pathways I have come
   across, none have indicated that DNA is using means-ends analysis."

And then, he proceeded to make the same points that I have been
making, as though they were different from what I was saying:

   "Certainly, connectionist models are very easy to decimate when
   offered up as STRONG models of children learning language, of real
   brains, of spin glasses, quantum calculators, or whatever.  That is
   why I view them as working systems which just illuminate the
   representation and search processes (and computational theories) which
   COULD arise in natural systems.  There is plenty of evidence of
   convergence between representations found in the brain and backprop or
   similar procedures despite the lack of any strong hardware equivalence
   (Anderson, Linsker); constrain the mapping task correctly, and local
   optimization techniques will find quite similar solutions.

It is the same thing again.  Yes, you can find things nets do, but
it's like bad statistics in which you don't describe what you're
testing for until after the experiment is done.  Let's see an ML solve
circle-in-square.  Let's see one of Pollack's massively parallel
parsers solve circle-in-square.  Without any "strong hardware"
pre-figuring of the network.  In fact, Pollack's next paragraph begins
with

   "Furthermore, the representations and processes discovered by
connectionist models may have interesting scaling properties and can
be given plausible adaptive accounts."

Is he angry at me because the required scaling properties for human
visual perception are not among those possessed by the NN models he
advocates?  I don't know, but there must be some reason for his rage.
He finishes with:
 
   "On the other hand, I take it as a weakness of a theory of
   intelligence, mind or language if, when pressed to reveal its
   origin, shows me a homunculus, unbounded nativism, or some
   evolutionary accident with the same probability of occurrence as God.

Is this a paraphrase of the beginning of "Society of Mind", or does
Pollack think it is opposing it?  Come on, Jordan.  We're on the same
side.  Yet you have been writing the most hostile and savage reviews
of my work.  What's the deal here?

ravula@glblview.uucp (Ramesh Ravula) (10/19/90)

Keith L. Breinholt writes:

| Someone correct me if I'm wrong, I though Neural Nets as an area of
| study was only 5 or so years old.  In terms of research, 5 years is
| baby technology.  If Neural Nets are consistent with other research it
| won't make it into general public acceptance for another 5 to 10
| years.

    Neural networks as a field is at least 40 years old; it just was not always
known by the name "Neural Networks".  In fact, the field has been around
longer than traditional AI.  Marvin Minsky's and Seymour Papert's
"Perceptrons", published in 1969, was a snag in the development of the field.
The recent revival of the field started a few years ago, after J. J. Hopfield
(Caltech) conducted a study for the National Academy of Sciences.  As far
as general public acceptance is concerned (whatever you mean by that), there
are many products and development systems on the market, and many major
companies are working diligently to bring more application products to the
marketplace.  Last but not least, there is a newsgroup which discusses all
aspects of neural networks, called comp.ai.neural-nets (in case you do not
already know).

 Ramesh Ravula
 GE Medical Systems
 Mail W-826
 3200 N. Grandview Blvd.
 Waukesha, WI 53188.
--
email:    {att|mailrus|uunet|phillabs}!steinmetz!gemed!ravula
                                   or
          {att|uwvax|mailrus}!uwmcsd1!mrsvr!gemed!ravula

ssingh@watserv1.waterloo.edu (The Sanj - ISman (iceman)) (10/21/90)

In article <69929@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>
>	Most Neural Nets now exist only in software form for the
>traditional brand of computer. And it is on such software Neural Nets
>that designs of hardware Neural Nets will ultimately depend -- it is
>much easier to rewrite a program than to design a new chip. And even a
>Neural Net chip would need to be controlled by such a computer.

If you are saying that even PDP systems implementing neural network
learning procedures in hardware would still need a von Neumann
computer to control them, the way a Connection Machine is still
connected to a front-end computer, I beg to differ. It seems that
we are caught up in the "homunculus" dilemma, where we see an
NN as just a bunch of elements exchanging information. But unlike
a traditional algorithm of the kind run on serial computers, an NN
algorithm in hardware would _change itself_ as it learns. There should
be no need for an external authority; i.e., my brain is just a few
billion neurons, and there is no "homunculus" implementing emotion, free
will, etc. I AM the net, and the net is me. No assembly required :-)
Amendments and corrections welcomed.

>	The simplicity of the basic algorithms keep making me wonder
>why NN's did not take off earlier -- the basic code for one takes up
>only a couple pages of Fortran or C. Try writing one yourself. I guess
>that (in)famous book by Minsky and Papert, _Perceptrons_, with its
>seemingly airtight theoretical arguments, is what had squelched the
>field for so long.

It seems like you are miffed at Minsky. Minsky and Papert showed the
limitations of the Perceptron for a three-layer net, I believe. They did
not say anything about larger numbers of layers. The researchers in the
field decided to pack up.
>$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov



-- 
"No one had the guts... until now..." -New Anti-Repression Convert (NARC)  
|-NARCotic $anjay [+] $ingh	ssingh@watserv1.[u]waterloo.{edu|cdn}/[ca] -|
watserv1%rn alt.CENSORED: UW Provost sez "THINK SAFE THOUGHTS; AVOID NASTY ALT 
FEEDS; & PROTECT YOURSELF: WEAR A CONDOM ON YOUR HEAD. Call x-2809. Let's Talk. 

loren@tristan.llnl.gov (Loren Petrich) (10/22/90)

In article <3740@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>	[that I should read _Perceptrons_...]

	I found the book in the Berkeley library and tried reading it.
It was rather difficult to follow its arguments. I guess I should
check it out and work through it VERY carefully.

	But the impression I get, rightly or wrongly, is that its
principal conclusions are:

One layer of perceptron units can only distinguish between classes of
inputs separated by a hyperplane.

More layers of perceptron units can distinguish between much more
general classes of inputs.

There exist learning rules for one layer of perceptron units, but
there do not appear to be practical learning rules for more than one
layer.


	And that was seemingly that for perceptron-like architectures.


	I guess some algorithm like back-propagation looks simple --
after one discovers it. But it does seem easy to generalize the
two-state output of the original perceptrons to a continuous-valued
output, after which the back-prop algorithm readily follows from
minimizing the mean squared error <(actual - calculated)^2>. I wonder
if anyone ever considered continuous-output perceptrons in the early
days of the field.
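
	As a sanity check on that claim, here is a small C program that
compares the chain-rule gradient of a single sigmoid unit's squared
error against a finite difference; the particular weights, inputs, and
target are arbitrary numbers:

/* Check the chain-rule gradient of a single sigmoid unit's squared
   error against a finite difference.  (Weights, inputs, and the target
   are arbitrary numbers picked for the check.) */
#include <stdio.h>
#include <math.h>

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

static double sqerr(double w0, double w1, double b,
                    double x0, double x1, double t)
{
    double y = sigmoid(w0*x0 + w1*x1 + b);
    return 0.5 * (t - y) * (t - y);
}

int main(void)
{
    double w0 = 0.3, w1 = -0.7, b = 0.1, x0 = 1.0, x1 = 0.5, t = 1.0;
    double y  = sigmoid(w0*x0 + w1*x1 + b);
    double analytic = -(t - y) * y * (1.0 - y) * x0;   /* dE/dw0 */
    double h = 1e-6;
    double numeric  = (sqerr(w0 + h, w1, b, x0, x1, t) -
                       sqerr(w0 - h, w1, b, x0, x1, t)) / (2.0 * h);

    printf("dE/dw0: analytic = %.8f  numeric = %.8f\n", analytic, numeric);
    return 0;
}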


$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (10/24/90)

In article <3740@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>In that connection, there is wisdom in Thomas G Edwards' remarks in
><6664@jhunix.HCF.JHU.EDU>:
>>  I see a future where inductive learning by small homogeneous NNs
>>  is used in combination with more traditional AI type goal building.

>Divide-and-conquer is surely needed for circle-inside-square.  Note
>that we still don't nkow how the brain does it.

One leader in the strategy to create connectionist structures capable
of divide-and-conquer-style compositional learning is Juergen
Schmidhuber of T.U.M.  He imagines compositional learners utilizing
three kinds of networks.

One is a program executer, which receives as input a start
situation, a (sub)goal situation, and external senses.  This module
produces output which allows a robot to achieve the transformation
from the start situation to the (sub)goal situation.

The second structure is an evaluator, which receives a start situation
and (sub)goal situation as input.  It produces output which indicates
whether there is a program executer network which can perform the
transformation from start to (sub)goal states.

The third structure is a subgoal generator.  It receives as input a
start situation and a goal situation.  It produces a subgoal
as an output.

All networks are continually running, recurrent neural networks.

The subgoal generator is trained by applying a start and goal input to
it, and also applying the start state and output from the generator
to one evaluator network, and the output from the generator and the
goal state to another evaluator network.  The subgoal generator is
trained until the outputs of the two evaluator networks indicate there
is a program executer which can perform the transition from the
start state to the generated subgoal, and from that
subgoal to the goal (or until the net reaches a local minimum,
indicating that one or more new program executers have to be
developed to solve the problem).
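
  To make the control flow explicit, here is a stubbed-out sketch in C.
The three modules are plain functions rather than trained recurrent
networks, and the one-dimensional "states" and midpoint subgoals are
made up purely for illustration:

/* Control-flow sketch only: the three modules as stub functions, not
   trained recurrent networks.  The one-dimensional "states" and the
   particular stubs are made up purely for illustration. */
#include <stdio.h>

static int evaluator(double start, double goal)
{
    /* stub: pretend an executer exists only for short transitions */
    return goal - start <= 1.0 && start - goal <= 1.0;
}

static void program_executer(double start, double goal)
{
    /* stub: a real module would emit the actions for this transition */
    printf("  execute: %.1f -> %.1f\n", start, goal);
}

static double subgoal_generator(double start, double goal)
{
    /* stub: propose the midpoint as the subgoal */
    return 0.5 * (start + goal);
}

static void solve(double start, double goal)
{
    if (evaluator(start, goal)) {
        program_executer(start, goal);
    } else {
        double sub = subgoal_generator(start, goal);
        solve(start, sub);                   /* divide ... */
        solve(sub, goal);                    /* ... and conquer */
    }
}

int main(void)
{
    solve(0.0, 8.0);
    return 0;
}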

Obviously there needs to be much more work in this area.
It is important that connectionist researchers get as familiar
with training continually running recurrent neural networks with
both supervised and unsupervised methods as they are with three-layer
feedforward backprop-style networks.  Also, they must get over
any "us vs. them" feelings they have toward symbolic AI and look carefully
at the huge amount of machine learning theory which has already been
developed.

Ref:  J. H. Schmidhuber.  Towards compositional learning with
       dynamic neural networks.  Technical Report FKI-129-90,
       Institut fuer Informatik, Technische Universitaet Muenchen, 1990.

-Thomas Edwards
 (P.S.: Looking for Ph.D. programs in robotics and electrical
        engineering for fall '91)

sfp@stc06.ornl.gov (SPELT P F) (10/25/90)

The original poster of this piece thought nets research was only about 5
years old; someone suggested s/he look at McCulloch & Pitts (1943).  Also,
S. Grossberg, now of Boston University, has been publishing work on neural
networks, as simulations of actual brain processes, since 1967, I believe.
Much of his labor was carried on in anonymity, but he has more recently
begun to be widely recognized, along with several others in his group
at BU.

zed@mdbs.uucp (Bill Smith) (10/28/90)

In article <69929@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>In article <1990Oct16.135631.6444@cbnewsj.att.com> jwi@cbnewsj.att.com (Jim Winer @ AT&T, Middletown, NJ) writes:
>>
>>I worked on the Mark I Perceptron (Rosenblatt model) in 1959 
>>at Cornel Aeronautical Laboratories, Inc. (defunct) under contract
>>to Office of Naval Research (ONR). That makes the field at least
>>30 years old. Neural Nets have been inconvenient to work with until 
>>recently when specialized hardware has become available.

I guess people just don't count the value of wet ware, now do they?

>
>	Specialized hardware?????

A person sure seems specialized to me.
>
>	Even that is still only in the experimental stage.

Hah!   Pure poppy cock. Lies! Blasphemy before God!
>
>	Most Neural Nets now exist only in software form for the
>traditional brand of computer. And it is on such software Neural Nets
>that designs of hardware Neural Nets will ultimately depend -- it is
>much easier to rewrite a program than to design a new chip. And even a
>Neural Net chip would need to be controlled by such a computer.

F*ck you!
>
>	The simplicity of the basic algorithms keep making me wonder
>why NN's did not take off earlier -- the basic code for one takes up
>only a couple pages of Fortran or C. Try writing one yourself. I guess
>that (in)famous book by Minsky and Papert, _Perceptrons_, with its
>seemingly airtight theoretical arguments, is what had squelched the
>field for so long.
>
Airtight theoretical arguments are to life as a vacuum is to a toy balloon.
Academia is an accretion of sh*t.

>	I wonder what the fellow who had worked on the Rosenblatt Mark
>I Perceptron has to say about this question. What does he have to say
>about the work of Minsky and Papert?
>
Good question.   I want to see the answer posted next week.
>
>$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov
>
>Since this nodename is not widely known, you may have to try:
>
>loren%sunlight.llnl.gov@star.stanford.edu
Thank you for your idiotic approach to what is a simple problem.

God (Obviously, I am lying)
pur-ee!mdbs!zed

zed@mdbs.uucp (Bill Smith) (10/28/90)

In article <6664@jhunix.HCF.JHU.EDU> ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) writes:
>In article <1990Oct16.135631.6444@cbnewsj.att.com> jwi@cbnewsj.att.com (Jim Winer @ AT&T, Middletown, NJ) writes:
>>Keith L. Breinholt writes:
>>
>>| Someone correct me if I'm wrong, I though Neural Nets as an area of
>>| study was only 5 or so years old.  In terms of research, 5 years is
>>| baby technology.  If Neural Nets are consistent with other research it
>>| won't make it into general public acceptance for another 5 to 10
>>| years.
And God is a 2 year old.
>>
>>I worked on the Mark I Perceptron (Rosenblatt model) in 1959 
>>at Cornel Aeronautical Laboratories, Inc. (defunct) under contract
>>to Office of Naval Research (ONR). That makes the field at least
>>30 years old. Neural Nets have been inconvenient to work with until 
>>recently when specialized hardware has become available.
Inconvenient for Vulcan's perhaps....  But they're imaginary like the
rest of the Artificial Intelligence bullsh*t.
>
>Actually, the death of neural nets in the late sixties and the rebirth of
>them a few years ago is a complex story.  
Death is a myth.  However, if one is dead to God, tough luck asshole.
Ask Ken Forbus.
>Adalines, Perceptrons, and 
>similar two-layer neural systems were developed, and actually proved
>useful in limited was for signal processing.  
Proof is a myth.  If proof is required, the idea is too complex for
even the simplest of electrons to understand.  If an electron can't
understand, how do you expect him (or her) (or it) (or they) (or Jeff, the
electron's real name) (the reason they are all the same is they are all
Jeff.  Every material thing that normal lunkhead's deal with is made of
Jeff, Joyce, Bruce and the two kids David and Ginger.  I should know,
they are my cousins.
>The big limitation was
>that with two feedforward layers of step-function or sigmoidal activation
>functions, mappings from input to output could only be developed which
>include areas divided by a single curve in the input space (i.e. 
>functions like exclusive-OR could not be represented by the structure).
Speak english not vulcan.  Vulcan is the language of Hell, which is a
fine thing, since that's where you're destined.
>It was fairly obvious from very early neural models that "hidden layers,"
>were required between the input and output neural layers.  
A hidden layer is a non-existent layer.
>  Now, the perceptron learning rule was developed by agreeing on an error
>function to be minimized (usually the sum of squares of differences between
>actual outputs and desired outputs).  
*The* perceptron (Ted) (the only one, you know) is a hippie.  He's willing
to do anything if it looks fun.
>Training was done by moving along
>the negative gradient of this error function, thus (usually) minimizing it.
Ted is one smart guy.  I never knew it until you said this.
>However, while it is fairly obvious how to differentiate the error function
>for a two-layer net, no one could work out how to differentiate the
>error function for multiple layers.  
One should learn how to differenitate d(t)*e(d(t)) first.  (no spelling
errors).
>Marvin Minsky made some comments on
>the difficulty of this in _Perceptrons_, and alot of people lost interest
>in these models.
Whoa!  Who is this Marvin Dude that he thinks he can write a biography of
Ted, who hasn't even been born yet....  
(Well, maybe he's been born, but he hasn't become rich and famous like
he deserves.)
>   Eventually someone worked out how to find the error function gradient
>for multiple layer networks.  
And, pray tell, does it involve complex arithmetic?
>It really isn't that hard to do, and I
>don't understand what was so difficult about it.  
Difficulty is like religion, it's a cult of the foolish.
>I guess the difficult
>concept was passing error back from the output layer to the hidden layer,
>and prudent use of the chain rule.  
Have you guy's ever studied EE?  This is called "Systems Theory" and
is trivial to any graduate EE who's understood the course.
>Really, I wonder why it took so long
>to work out.  
Probably because they were using a pencil instead of a lavatory.
>Actually, I have a feeling some people did work it out in
>the seventies, but after _Perceptrons_ perhaps people were just turned off
>by NNs.  
I'm turned off by MM's, but then I'm just wierd. (or is that wired?)
>  Finally with the publication of _Parallel_Distributed_Processing_,
>everyone saw how easy it was to program a multi-layer perceptron,
>and other NN structures such as Boltzman Machines.  At first, however,
>mathematical failure of NN researchers #2 happened:  fixed step size
>gradient descent wass used.  Anyone from mathematical sciences can tell
>you that this is a silly way to minimize a function, and learning 
>speedups of several orders of magnitude can easily be achieved with
>conjugate-gradient and other more advanced minimization methods.
>Thus people were lead to believe that even for very small problems,
>NNs were slow, when infact they really are not.
>  Now even recurrent neural networks can be trained, allowing NNs to have
>temporal behavior.  
>  But NN researchers are beginning to realize that training a big
>homogeneous network is not the answer to good learning systems.
>Modularlization is required.  Cascade-Correlation is a NN algorithm
>which develops feature representations which can best help to reduce
>the network error, and then these features are used to minimize the
>network error.  It is able to solve many problems which were difficult
>for homogenous NNs to solve.
>  I see a future where inductive learning by small homogeneous NNs
>is used in combination with more traditional AI type goal building.
>Cascade-Correlation is a step in that direction.  Divide-and-conquer
>of traditional AI is combined with the easy inductive learning of
>traditional NNs.  Of course, the trick is to couch this in a
>connectionist framework to continue to allow for fast parallel 
>computation.
>
>-Thomas Edwards

Why do I waste my time with you people.   Take some EE.  Read some poetry.
Try as you might, you won't understand it.  Not only that, Vogon's *write*
better poetry.  I guess that means that you are all just really the
same thing: Cynthia Fitzmelton.  What a poor woman.  She knew the answer
and then the Vogon's destroyed the planet.  They were just jealous because
they found out they couldn't write the *worst* poetry after all.

God (Obviously, I am lying)

zed@mdbs.uucp (Bill Smith) (10/28/90)

Growth is slow, but it builds sturdy foundations.
Voices are weak yet they overcome evil.
Society is concrete but it is a fluid home for members.
Progress is finite yet it touches the infinite. 

		wws, sept 12 1988
==============================================================================
In article <3740@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>In article <69929@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>
>>	The simplicity of the basic algorithms keep making me wonder
>>why NN's did not take off earlier -- the basic code for one takes up
>>only a couple pages of Fortran or C. Try writing one yourself. I guess
>>that (in)famous book by Minsky and Papert, _Perceptrons_, with its
>>seemingly airtight theoretical arguments, is what had squelched the
>>field for so long.
>
>DAMMIT.  Try reading the book. What happened was that the field had
>already flattened out, because, although Perceptrons could learn to
>recognize certain patterns, they seemed unable to learn some other
>kinds of patterns.  The book explicitly analyzes "three layer nets" --
>input layer / coefficients / hidden layer / coefficients / and single
>neuron output.  But, in fact, most theorems apply to unrestricted multilayer,
>loop-free nets.  This does not seem to be well-known.  I assumed it
>was obvious.
That's right.  Curse at what you don't understand.   The Jews did it
in the first century A.D.  I guess it makes sense that they've continued
to do it till today.
>
>Since no one has found any errors in those "seemingly airtight
>theoretical arguments", you should try to understand what point you're
>missing!  It seems strange that I should have to do explain this in
>comp.ai, at this late date.  "Perceptrons" explained that it will be
>hard for such nets to recognize, for example, certain kinds of
>group-invariant recognitions, without duplicating hardware for every
>element of the group.
"Group invariant?!?!!!"  Do you even *know* the mathematical definition
of a Group?   A Group is a set of elements and 1 binary operator that
is closed under that operation.  Such a simple idea, yet it couldn't be
more complicated in reality.
"Duplicating Hardware!?!???"  What about recycling, saving the environment
and life, happiness and the American Way?  Have you ever heard of a
crisis in faith that is shaking the morally dead of the earth?  But, since
they are dead, it don't matter that they will all die (permanently) 
within a year or 10.
>
>     EXAMPLE: in a simple 100 x 100 square retina, recognize all the
>     images that could be reasonably described as depicting "A SQUARE
>     INSIDE A CIRCLE".
What a good idea!  The greeks loved it.  Especially Alpha Chi Rho.
>
>Loren and others are absolutely right, in that the 80's showed that ML
>(multilayer) nets could be made to learn many useful patterns.
>"Perceptrons" was concerned with patterns that MLs couldn't learn, not
>ones they could!!!!!!!!!!
And as such it is a complete work of unadulterated academia and should
be flushed down a toilet, except that the toilet would back up, a plumber
would have to be called and the department head would sure have some
funny looks to make about the whole funny incident.  It would be so funny,
and he would be such a perfect academic that he wouldn't get the joke.
Herman Rubin should try this on his lunch hour October 30, 1990 as an
April Fools Joke (postponed)  If he get's fired, well, it just as well,
because there are better things awaiting him.
>
>So no collection of exciting stories of MLs learning things counters
>the problems with what they can't learn -- like those distance
>invariant relationships between parts of images.
Oh get a life!
>
>In many cases, "successful" applications of MLs depend on
>pre-processing a picture image, by first normalizing it in size, and
>then centering it, before presenting it to the ML.  Fine - but don't
>tell people that this refutes the Minsky-Papert theorems.  Instead,
>now try todo that "circle-ionside-square" problem!  And then realize
>that many real-world problems require multiple normalizations, which
>cannot be pre-computed until you have picked out the sub-patterns.
Theorems are, by definition incomplete.   If a theorem were complete,
It would explain why God is gay.
>
>In that connection, there is wisdom in Thomas G Edwards' remarks in
><6664@jhunix.HCF.JHU.EDU>:
>
>  ... Cascade-Correlation is a NN algorithm which is able to solve
>  many problems which were difficult for homogenous NNs to solve. ...
>  I see a future where inductive learning by small homogeneous NNs
>  is used in combination with more traditional AI type goal building.
>  Cascade-Correlation is a step in that direction.  Divide-and-conquer
>  of traditional AI is combined with the easy inductive learning of
>  traditional NNs.  Of course, the trick is to couch this in a
>  connectionist framework to continue to allow for fast parallel
>  computation.
>
Quoting scripture will get you a dinner of your works when the revolution
comes.  Not only that, but it will be so well seasoned and tenderly
baked that you will even enjoy the aroma of the decaying ink.
>Divide-and-conquer is surely needed for circle-inside-square.  Note
>that we still don't nkow how the brain does it.
I don't "nkow" how the brain does it either.  I nkow all information in
binary decisions: yes/no up/down in/out  live/die Q-bit/R-bit cubit/metre
happy/sade (say, the Marquis was a knower of things you don't even imagine)

Oh what I would nkow to conquer you assholes so that they could be
divided into perfect lattices of neural networks able to calculate Pi
at a speed that makes the Cray computers look like the stone tablets
of Sumerians that they are.  You didn't know that Cray is Iraqi like
the rest of the civilized world, did you?
>
>Get with it, guys!  Of course there are many exciting things that can
>be done with ML networks.  A good deal of the brain is made of them.
>And there is a lot that require non-ML networks, and a lot of the
>brain is non-ML.  Instead of bashing "Perceptrons", you should use it
>as a model, and try to find more general statements about what ML and
>other networks can do, and what are their limitations.
Fortunately, I can do just fine without a brain.  I've managed to
beat it into submission with an awl and meat hook.  Ok, so I made that
part up.  What will you do?  Censor me?
>
>What we don't need are intemperate remarks like those in
><POLLACK.90Oct18014110@dendrite.cis.ohio-state.edu>, who seems to
>deliberately misinterpret everything I have said in this group and
>other places.  I don't know why he's so angry at me.
I am angry too.  I don't blame him.  An asshole is an asshole. Pure and
simple.  and you are just one of them.
>
>For example, in  one message to this group I said:
>
>   "... Where is the "traditional, symbolic, AI in the brain"?  The
>   answer seems to have escaped almost everyone on both sides of this
>   great and spurious controversy!  The 'traditional AI' lies in the
>   genetic specifications of those functional interconnections: the bus
>   layout of the rel A large, perhaps messy software is there before your
>   eyes, hiding in the gross anatomy.  Some 3000 "rules" about which
>   sub-NN's should do what, and under which conditions, as dictated by
>   the results of computations done in other NNs...."
>
>Pollack replied, with this weird objection
>
>   "I have to admit this is definitely a novel version of the
>   homunculus fallacy: If we can't find him in the brain, he must be
>   in the DNA! Of all the data and theories on cellular division and
>   specialization and on the wiring of neural pathways I have come
>   across, none have indicated that DNA is using means-ends analysis."
>
>And then, he proceeded to make the same points that I have been
>making, as though it were different from what I was saying:
>
>   "Certainly, connectionist models are very easy to decimate when
>   offered up as STRONG models of children learning language, of real
>   brains, of spin glasses, quantum calculators, or whatever.  That is
>   why I view them as working systems which just illuminate the
>   representation and search processes (and computational theories) which
>   COULD arise in natural systems.  There is plenty of evidence of
>   convergence between representations found in the brain and backprop or
>   similar procedures despite the lack of any strong hardware equivalence
>   (Anderson, Linsker); constrain the mapping task correctly, and local
>   optimization techniques will find quite similar solutions.
>
>It is the same thing again.  Yes, you can find things nets do, but
>it's like bad statistics in which you don't describe what you're
>testing for until after the experiment is done.  Let's see an ML solve
>circle-in-square.  Let's see one of Pollack's massively parallel
>parsers solve circel in square.  Without any "strong hardware"
>pre-figuring of the network.  In fact, Pollack's next paragraph begins
>with
Oh, so you are a statistiction.  You have a brilliant, but short
career ahead of you.
>
>   "Furthermore, the representations and processes discovered by
>connectionist models may have interesting scaling properties and can
>be given plausible adaptive accounts."
>
>Is he angry at me because the required scaling properties for human
>visual perception are not among those posessed by the NN models he
>advocates?  I don't know, by there must be some reason for his rage?
>He finishes with,
> 
>   "On the other hand, I take it as a weakness of a theory of
>   intelligence, mind or language if, when pressed to reveal its
>   origin, shows me a homunculus, unbounded nativism, or some
>   evolutionary accident with the same probability of occurrence as God.
>
>Is this a paraphrase of the beginning of "Society of Mind", or does
>Pollack think it is opposing it.  Come on Jordan.  We're on the same
>side.  Yet you have been writing the most hostile and savage reviews
>of my work.  What's the deal here?

Know I nkow that you are on different sides.  Jordan is good.  You are not.

Hebrews are not the children of God.  They are an angry, stiff necked
race that God chose as his Own only because He is Himself a Hebrew.
An asshole is a great thing to be.  Lucky you are that you are one and
don't just have one.  (propriety limits my further comments on this subject.)

God (Yes, I am lying)(Obviously)

Don't forget to pray for the 6,000,000 who were saved by God from the
reign of Harry Truman, King of Japan.  From LBJ, a Texan of profound
self-esteem in his own mind.  From King Nixon, the man who kept the
Cambodians enslaved and murdered each of them (personally and without
regret)  From King Ronald, clown for a day, King of Japan.  King George IV
of Kennebunkport, whom God loves and will allow to amend for his valiant
service in the military.

Death is too good for "men" such as these.  They must,  They must, 
They must,  They must make amends.

zed@mdbs.uucp (Bill Smith) (10/28/90)

In article <70159@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>In article <3740@media-lab.MEDIA.MIT.EDU> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>>	[that I should read _Perceptrons_...]
>
>	I guess some algorithm like back-propagation looks simple --
>after one discovers it. But it does seem easy to generalize the
>two-state output of the original perceptrons to a continuous-valued
>output, from which the back-prop algorithm readily follows from
>minimizing the quantity <|actual - calculated|>. I wonder if anyone
>had ever considered continuous-output perceptrons in the early days of
>the field.

Literally, this is an EE problem in the field of Signals and Systems.
You have a differential equation (possibly multivariate) that describes
the system.  Any EE will tell you that you want the system to be linear
because it simplifies life incredibly.

In the simplest case, it is well understood and a child of 2 can 
understand the basic principles.  It can be made more and more complicated 
by adding arithmetic, simultaneous equations, linear algebra, differential
equations, complex mathematics, time delays, non-linear effects, ad 
infinitum until the problem you started out to solve with has been 
sufficiently impressed with your knowledge (or the MIP rating of your 
computer if that fails :-) that it capitulates into the universal
equation:  x=1 (or x=0 as the case may be.)  If x=0 you had a trivial 
problem that you should try to find out why it was so hard for you to 
see from the start that it was trivial.  If x=1, you now know
something you didn't know before.  Extrapolate backwards through 
your steps until you find out the answers to the original question.

In life there are *no* trivial problems, therefore, the answer to all problems
is x=1.   Now, you have to find out what is different between the problem
you have and x=1 so that you can extrapolate backwards.  This is pure
philosophy.  I am glad you have asked your questions because now *I* 
understand myself better than I did before.  This is the benefit of 
answering other peoples problems: you solve 2 problems at the same time,
their problem and one of your own.  However, if you find out that they did
not want the answer to the problem they asked but instead the answer to some
other problem you will have 4 problems on your hand:  2 real and 2 imaginary.
The real problems are their original problem and the problem that you created
for yourself with your solution.  The imaginary problems are the one
that they asked and the one that you created by solving a problem in 
yourself that did not exist to begin with.

These are the fundamental theorems of boolean arithmetic and complex
analysis:

	1 + 1 = 0 (boolean)
	i + 1 + -1 + -i = 0 (complex)

The whole of boolean arithmetic is that life's problems are real.
The whole of complex analysis is that imaginary problems are complex.


>$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov
>
>Since this nodename is not widely known, you may have to try:
>
>loren%sunlight.llnl.gov@star.stanford.edu

Thank you Loren.  You have made me happy.

Bill Smith
pur-ee!mdbs!zed

schwartz@unix.cis.pitt.edu (Michael A Schwartz) (10/29/90)

Does anyone know of any AI programs used for foreign language learning and/or
teaching?

loren@tristan.llnl.gov (Loren Petrich) (10/30/90)

In article <1990Oct27.194719.1005@mdbs.uucp> zed@mdbs.UUCP (Bill Smith) writes:
>>>... Neural Nets have been inconvenient to work with until 
>>>recently when specialized hardware has become available.
>
>I guess people just don't count the value of wet ware, now do they?

	Do they have to?

	The point is to try to set up pattern recognition on a system
that will never get tired, and that will be influenced by preconceived
ideas as little as possible.

>>	Specialized hardware?????
>
>A person sure seems specialized to me.

	We're talking about artificial, not biological systems here.

>>	Even that is still only in the experimental stage.
>
>Hah!   Pure poppy cock. Lies! Blasphemy before God!

	Why is that so?

	Are you serious??

>>	Most Neural Nets now exist only in software form for the
>>traditional brand of computer. And it is on such software Neural Nets
>>that designs of hardware Neural Nets will ultimately depend -- it is
>>much easier to rewrite a program than to design a new chip. And even a
>>Neural Net chip would need to be controlled by such a computer.
>
>F*ck you!

	What's so TERRIBLE????

	I still don't get what you are getting at.

	I'm only stating something derived from my personal experience
in the NN field.

>>	The simplicity of the basic algorithms keep making me wonder
>>why NN's did not take off earlier -- the basic code for one takes up
>>only a couple pages of Fortran or C. Try writing one yourself. I guess
>>that (in)famous book by Minsky and Papert, _Perceptrons_, with its
>>seemingly airtight theoretical arguments, is what had squelched the
>>field for so long.
>>
>Airtight theoretical arguments are to life as a vacuum is to a toy balloon.
>Academia is an accretion of sh*t.

	Try finding two even numbers that add up to an odd one
sometime.

	Or try to patent a perpetual-motion machine.

	And maybe then you will not laugh so hard at theoretical
arguments.

>Thank you for your idiotic approach to what is a simple problem.

	What's this all about???

	Is this a joke??????

>God (Obviously, I am lying)

	No comment.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu