[comp.ai] Time Magazine -- Computers of the Future

klee@daisy.UUCP (Ken Lee) (03/28/88)

In article <8803270154.AA08607@bu-cs.bu.edu> bzs@BU-CS.BU.EDU (Barry Shein) writes:
>
>The cover article this week for Time Magazine is "Computers of the
>Future". Mainly what they mean by that are the two paths
>supercomputing and artificial intelligence (e.g. neural networking).
>
>I haven't finished it, it seems fun, some errors of course but you
>don't expect extreme technical accuracy from such an article anyhow.

What do people think of the PRACTICAL future of artificial intelligence?
For a while, it seemed like it was going to take off.  All sorts of
expert systems and tools houses started to appear.  Most of these are
bankrupt now.  Even the biggies like Symbolics, Teknowledge, and
IntelliCorp are having major trouble.  The only companies that are
successful are Star Wars contractors, and I'm not sure if that's what
you'd call a practical application.

Is AI just too expensive and too complicated for practical use?  I
spent 3 years in the field and I'm beginning to think the answer is
mostly yes.  In my opinion, all working AI programs are either toys or
could have been developed much more cheaply using conventional
techniques.

Why is AI expensive?  No matter how good our theoretical inference or
representation techniques get, we still have the practical problem of
extracting human knowledge and transferring it to the machine.  The problem
I see is that every application is sufficiently unique to preclude
automated knowledge engineering.  Since you must solve it by hand
anyway, you could probably program it more efficiently using conventional
techniques.

Does AI have any advantage over conventional programming?  There are
claims about the benefits of learning systems, meta-level algorithms,
abstraction, etc., but for these to be cost effective, they must be
developed in an application-independent fashion.  Is this possible?
I don't think so, at least not in the near future.

I'm probably being short-sighted and ignoring the (very) long-term
possibilities.  There are also side-effects, like the popularizing of
object-oriented programming and powerful programming environments,
that are very practical.  Maybe I'm just looking at AI too broadly and
not at specific application areas.

What do you think?  Thanks for your thoughts.  No, I'm not an AI trying
to clone you.

Ken
-- 
What's the difference between a used car salesman and a computer salesman?
The used car salesman knows when he's lying.

sbrunnoc@eagle.ulowell.edu (Sean Brunnock) (03/28/88)

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>What do people think of the PRACTICAL future of artificial intelligence?
>
>Is AI just too expensive and too complicated for practical use?
>
>Does AI have any advantage over conventional programming?  

   Bear with me while I put this into a sociological perspective. The first
great "age" in mankind's history was the agricultural age, followed by the
industrial age, and now we are heading into the information age. The author
of "Megatrends" points out the large rise in the number of clerks as
evidence of this. 

   The information age will revolutionize agriculture and industry just as
industry revolutionized agriculture one hundred years ago. Industry gave to
the farmer the reaper, cotton gin, and a myriad of other products which
made his job easier.  Food production went up an order of magnitude, and by
the law of supply and demand, food became less valuable and farming became
less profitable.

   The industrial age was characterized by machines that took a lot of
manual labor out of the hands of people. The information age will be
characterized by machines that will take over mental tasks now accomplished
by people.
   
   For example, give a machine access to knowledge of aerodynamics,
engines, materials, etc. Now tell this machine that you want it to
design a car that can go this fast, use this much fuel per mile, cost
this much to make, etc. The machine thinks about it and out pops a 
design for a car that meets these specifications. It would be the
ultimate car with no room for improvement (unless some new scientific
discovery was made) because the machine looks at all of the possibilities.
These are the types of machines that I expect AI to make possible
in the future.

   I know this is an amateurish analysis, but it convinces me to study
AI.

   As for using AI in conventional programs?  Some people wondered what
was the use of opening up a trans-continental railroad when the pony
express could deliver the same letter or package wherever you wanted in just
seven days.  AI may be impractical now, but we have to keep making an effort
at it.


       Sean Brunnock
       University of Lowell
       sbrunnoc@eagle.cs.ulowell.edu

jbn@glacier.STANFORD.EDU (John B. Nagle) (03/29/88)

In article <5789@swan.ulowell.edu> sbrunnoc@eagle.UUCP (Sean Brunnock) writes:
>industrial age, and now we are heading into the information age. The author
>of "Megatrends" points out the large rise in the number of clerks as
>evidence of this. 

      The number of office workers in the U.S. peaked in 1985-86 and has 
declined somewhat since then.  White collar employment by the Fortune 500
is down substantially over the last five years.  The commercial real estate
industry has been slow to pick up on this, which is why there are so many
new but empty office buildings.  The new trend is back toward manufacturing.
You can't export services, except in a very minor way.  (Check the numbers
on this; they've been published in various business magazines and
can be obtained from the Department of Commerce.)

>   For example, give a machine access to knowledge of aerodynamics,
>engines, materials, etc. Now tell this machine that you want it to
>design a car that can go this fast, use this much fuel per mile, cost
>this much to make, etc. The machine thinks about it and out pops a 
>design for a car that meets these specifications. It would be the
>ultimate car with no room for improvement (unless some new scientific
>discovery was made) because the machine looks at all of the possibilities.

      Wrong.  Study some combinatorics.  Exhaustive search on a problem like
that is hopeless.  The protons would decay first.
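
To put rough numbers on that, here is a back-of-the-envelope sketch (in
Python; the design-space size and the evaluation rate are invented purely
for illustration):

    # Even a modest, made-up design space dwarfs any conceivable search budget.
    choices = 100                  # independent design decisions (assumed)
    alternatives = 10              # options per decision (assumed)
    designs = alternatives ** choices           # 10**100 candidate designs

    rate = 10 ** 12                # designs evaluated per second (very generous)
    seconds_per_year = 3600 * 24 * 365
    years = designs / (rate * seconds_per_year)
    print("%.0e designs -> about %.0e years of search" % (designs, years))
    # ~3e80 years; experimental lower bounds on proton decay are "only" ~1e34.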

					John Nagle

cwp@otter.hple.hp.com (Chris Preist) (03/29/88)

Whatever the future of AI is, it's almost certainly COMPANY CONFIDENTIAL!

:-)                       Chris


Disclaimer:
In this case, the opinion expressed probably IS the opinion of my employer!

cdfk@otter.hple.hp.com (Caroline Knight) (03/29/88)

Whatever the far-future uses of AI are, we can try to make the
current uses as humane and as ethical as possible.  I actually
believe that AI in its current form should complement humans, 
not make them redundant.  It should increase the skill of the 
person doing the job by doing those things which are boring
or impractical for humans but possible for computers.

This is the responsibility mostly of people doing applications
but can also form the focus of research.  When sharing a job
with a computer, ask which tasks are best automated and which are best 
given to the human - not just which it is possible to automate!
Then the research can move on to how to automate those that it
is desirable to have automated, instead of simply trying to show 
how clever we all are in mimicking "intelligence".

Perhaps computers will free people up so that they can go back
to doing some of the tasks that we currently have machines do
- has anyone thought of it that way?  

And if we are going to do people out of jobs, then we'd better
start understanding that a person is still valuable even if 
they do not do "regular work".  How can AI actually improve life
for those who are made jobless by it?  Can we improve on previous
revolutions by NOT riding roughshod over the people who are 
displaced?

Either that or prepare to give up our world to the machines -
perhaps that's why we are not looking after it very carefully!

Caroline Knight

What I say is said on my own behalf - it is not a statement of
company policy.

rwojcik@bcsaic.UUCP (Rick Wojcik) (03/30/88)

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>
>Is AI just too expensive and too complicated for practical use?  I
>spent 3 years in the field and I'm beginning to think the answer is
>mostly yes.  In my opinion, all working AI programs are either toys or
>could have been developed much more cheaply using conventional
>techniques.
>
Your posting was clearly intended to provoke, but I'll try to keep the
flames low :-).  Please try to remember that AI is a vast subject area.
It is expensive because it requires a great deal of expertise in language,
psychology, philosophy, etc.--not just programming skills.  It is also a
very high risk area, as anyone can see.  But the payoff can be
tremendous.  Moreover, your opinion that conventional techniques can
replace AI is ludicrous.  Consider the area of natural language.  What
conventional techniques that you know of can extract information from
natural language text or translate a passage from English to French?
Maybe you believe that we should stop all research on robotics.  If not,
would you like to explain how conventional programming can be used to get
robots to see objects in the real world?  But maybe we should give up on
the whole idea.  We can replace robots with humans.  Would you like to
volunteer for the bomb squad :-)?  In the development stage, AI is expensive,
but in the long term it is cost effective.  Your pessimism about the field
seems to be based on the failure of expert systems to live up to the hype.
The future of AI is going to be full of unrealistic hype and disappointing
failures.  But the demand for AI is so great that we have no choice but to
push on.
-- 
Rick Wojcik   csnet:  rwojcik@boeing.com	   
              uucp:  {uw-june  uw-beaver!ssc-vax}!bcsaic!rwojcik 
address:  P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone:    206-865-3844

jsnyder@june.cs.washington.edu (John Snyder) (03/31/88)

In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>...  But the demand for AI is so great that we have no choice but to
>push on.

We always have the choice not to develop a technology; what may be lacking
are reasons or will.

jsnyder@june.cs.washington.edu              John R. Snyder
{ihnp4,decvax,ucbvax}!uw-beaver!jsnyder     Dept. of Computer Science, FR-35
                                            University of Washington
206/543-7798                                Seattle, WA 98195

simon@comp.lancs.ac.uk (Simon Brooke) (03/31/88)

In article <5789@swan.ulowell.edu> sbrunnoc@eagle.UUCP (Sean Brunnock) writes:
>In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>>What do people think of the PRACTICAL future of artificial intelligence?
>>
>>Is AI just too expensive and too complicated for practical use?
>>
>>Does AI have any advantage over conventional programming?  
>
>   Bear with me while I put this into a sociological perspective. The first
>great "age" in mankind's history was the agricultural age, followed by the
>industrial age, and now we are heading into the information age.

Oh God! I suppose the advantage of the net is that it allows us to betray
our ignorance in public, now and again. This is 'sociology'? Dear God!

>   For example, give a machine access to knowledge of aerodynamics,
>engines, materials, etc. Now tell this machine that you want it to
>design a car that can go this fast, use this much fuel per mile, cost
>this much to make, etc. The machine thinks about it and out pops a 
>design for a car that meets these specifications. 

And here we really do have God - the General Omnicompetent Device - which
can search an infinite space in finite time. (Remember that Deep Thought
took 7 1/2 million years to calculate the answer to the ultimate question
of life, the universe, and everything - and at the end of that time could
not say what the question was).

Seriously, if this is why you are studying AI, throw it in and study some
philosophy. There *are* good reasons for studying AI: some people do it in
order to 'find out how people work' - I have no idea whether this project
is well directed, but it is certain to raise a lot of interesting
problems. Another is to use it as a tool for exploring our understanding
of such concepts as 'understanding', 'knowledge', 'intelligence' - or, in
my case, 'explanation'. Obviously I believe this project is well directed,
and I know it raises lots of interesting problems...

And occasionally these interesting problems will spin off technologies
which can be applied to real world tasks. But to see AI research as driven
by the need to produce spin-offs seems to me to be turning the whole
enterprise on its head.


-- 
** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      *
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
************************************************************************* 

hollombe@ttidca.TTI.COM (The Polymath) (04/01/88)

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>What do people think of the PRACTICAL future of artificial intelligence?

My employers just sponsored a week-long in-house series of seminars,
films, vendor presentations and demonstrations of expert systems
technology.  I attended all of it, so I think I can reasonably respond to
this.

Apparently, the expert systems/knowledge engineering branch of so-called
AI (of which, more later) has made great strides in the last few years.
There are many (some vendors claim thousands of) expert-system-based
commercial applications running in large and small corporations all over
the country.

In the past week we saw presentations by Gold Hill Computers (GOLDWORKS),
Aion Corp. (ADS), Texas Instruments (Personal Consultant Plus) and Neuron
Data (Nexpert Object).  The presentations were impressive, even taking
into account their sales nature.  None of the vendors is in any financial
trouble, to say the least.  All claimed many delivered, working systems.

A speaker from DEC explained that their VAX configurator system couldn't
have been developed without an expert system (they tried and failed); it
is now one of the oldest and most famous expert systems running.

It was pointed out by some of the speakers that companies using expert
systems tend to keep a low profile about it.  They regard their systems
as company secrets, proprietary information that gives them an edge in
their market.

Personal Impressions:

The single greatest advantage of expert systems seems to be their rapid
prototyping capability.  They can produce a working system in days or
weeks that would require months or years, if it could be done at all, with
conventional languages.  That system can subsequently be modified very
easily and rapidly to meet changing conditions or include new rules as
they're discovered.  Once a given algorithm has stabilized over time, it
can be re-written in a more conventional language, but still accessed by
the expert system.  The point being that the algorithm may never have been
determined at all but for the adaptable rapid prototyping environment.
(The DEC VAX configurator, mentioned above, is an example of this.  Much of
it, but not all, has been converted to conventional languages.)
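
To make that flavor concrete, here is a minimal sketch of the kind of
rule-driven core these shells provide.  It is not any particular vendor's
syntax; the rules and facts are invented, loosely in the spirit of the
VAX configurator, and it is written in Python for brevity:

    # Naive forward chaining: apply rules until no rule adds a new fact.
    # Each rule: (name, set of conditions that must all hold, fact to assert).
    rules = [
        ("r1", {"cpu=vax", "disks>2"},      "needs=expansion-cabinet"),
        ("r2", {"needs=expansion-cabinet"}, "add=cabinet-power-supply"),
    ]

    def run(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for name, conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)   # rule fires, new fact asserted
                    changed = True
        return facts

    print(sorted(run({"cpu=vax", "disks>2"})))

The rapid-prototyping property described above is just that new rules can
be appended to the list as the experts articulate them, without
restructuring the rest of the program.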

As for expense, prices of systems vary widely, but are coming down.  TI
offers a board with a LISP mainframe-on-a-chip (their term) that will turn
a Mac II into a LISP machine for as little as $7500.  Other systems went
as high as an order of magnitude over that.  I personally think these
won't really take off 'til the price drops another order of magnitude to
put them in the hands of the average home hacker.

Over all, I'd have to say that expert systems, at least, are alive and
well with a bright future ahead of them.

About Artificial Intelligence:

I maintain this is a contradiction in terms, and likely to be so for the
foreseeable future.  If we take "intelligence" to mean more than expert
knowledge of a very narrow domain, there's nothing in existence that can
equal the performance of any mammal, let alone a human being.  We're just
beginning to explore the types of machine architectures whose great^n-
grandchildren might, someday, be able to support something approaching
true AI.  I'll be quite amazed to see it in my lifetime (but the world has
amazed me before (-: ).

-- 
The Polymath (aka: Jerry Hollombe, hollombe@TTI.COM)   Illegitimati Nil
Citicorp(+)TTI                                           Carborundum
3100 Ocean Park Blvd.   (213) 452-9191, x2483
Santa Monica, CA  90405 {csun|philabs|psivax|trwrb}!ttidca!hollombe

arti@vax1.acs.udel.EDU (Arti Nigam) (04/02/88)

In article <4565@june.cs.washington.edu> you write:
>
>We always have the choice not to develop a technology; what may be lacking
>are reasons or will.

I heard this from one of the greats in computer hardware evolution, only
I don't remember his name.  What he said, and I say, is essentially this:
if you are part of an effort towards progress, in whatever field or
domain, you should have some understanding of WHERE you are going and
WHY you want to get there.

Arti Nigam

gvw@its63b.ed.ac.uk (G Wilson) (04/03/88)

In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>         Moreover, your opinion that conventional techniques can
>replace AI is ludicrous.  Consider the area of natural language.  What
>conventional techniques that you know of can extract information from
>natural language text or translate a passage from English to French?

Errmmm...show me *any* program which can do these things?  To date,
AI has been successful in these areas only when used in toy domains.

>The future of AI is going to be full of unrealistic hype and disappointing
>failures. 

Just like its past, and present.  Does anyone think AI would be as prominent
as it is today without (a) the unrealistic expectations of Star Wars,
and (b) America's initial nervousness about the Japanese Fifth Generation
project?

>           But the demand for AI is so great that we have no choice but to
>push on.

Manifest destiny??  A century ago, one could have justified
continued research in phrenology by its popularity.  Judge science
by its results, not its fashionability.

I think AI can be summed up by Terry Winograd's defection.  His
SHRDLU program is still quoted in *every* AI textbook (at least all
the ones I've seen), but he is no longer a believer in the AI
research programme (see "Understanding Computers and Cognition",
by Winograd and Flores). 

Greg Wilson

kbc@mdbs.UUCP (Kevin Castleberry) (04/04/88)

> It should increase the skill of the 
>person doing the job by doing those things which are boring
>or impractical for humans but possible for computers.
>
	.
	.
	.
> When sharing a job
>with a computer, ask which tasks are best automated and which are best 
>given to the human - not just which it is possible to automate!
For the most part, this is what I see happening in the truly successful
ES applications I see implemented.  Occasionally there is one that provides
a solution to a problem so complex that humans did not try.  Most of
the time it is just providing the human a quicker and more reliable way
to get the job done so s/he can move on to more interesting tasks.

>Perhaps computers will free people up so that they can go back
>to doing some of the tasks that we currently have machines do
>- has anyone thought of it that way?  
>
I certainly have observed this.  Often the human starts out doing interesting
design work, problem solving, etc., but then gets bogged down in the necessities
of keeping the *system* running.  I have observed such automation giving
humans back the job they enjoy.

>And if we are going to do people out of jobs then we'd better
>start understanding that a person is still valuable even if 
>they do not do "regular work". 
My own belief is that if systems aren't developed to help us work smarter,
then the jobs will disappear anyway, to the company that does develop such
systems.


	support@mdbs.uucp
		or
	{rutgers,ihnp4,decvax,ucbvax}!pur-ee!mdbs!support

	The mdbs BBS can be reached at: (317) 447-6685
	300/1200/2400 baud, 8 bits, 1 stop bit, no parity

Kevin Castleberry (kbc)
Director of Customer Services

Micro Data Base Systems Inc.
P.O. Box 248
Lafayette, IN  47902
(317) 448-6187

For sales call: (800) 344-5832

mrspock@hubcap.UUCP (Steve Benz) (04/04/88)

From article <1134@its63b.ed.ac.uk>, by gvw@its63b.ed.ac.uk (G Wilson):
> In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>>         Moreover, your opinion that conventional techniques can
>>replace AI is ludicrous.  Consider the area of natural language.  What
>>conventional techniques that you know of can extract information from
>>natural language text or translate a passage from English to French?
> 
> Errmmm...show me *any* program which can do these things?  To date,
> AI has been successful in these areas only when used in toy domains.

In a real-world project here (real world at least as far as real money
will carry you...), we developed a nearly-natural-language system that
deals with the "toy domain" of reading mail, querying databases, and some
other stuff.

It may be a toy, but some folks were willing to lay out a significant
number of dollars to get it.  These applications are built on a
lazy-evaluation, functional language (I wouldn't call that a "conventional
technique").

But the best part about the whole thing (as far as our contract monitor is
concerned) is that it really wasn't all that expensive to do--less than
20 man-months went into the development of the language and fitting out
the old menu-driven software with the new technique.  Overall, it was a
highly successful venture, allowing us to create high-quality user-interfaces
very quickly, and develop them semi-independently of the application itself.
None of the "conventional techniques" we had used before allowed us this.

So you see, AI has applications.  I think the problem is that AI techniques
like expert systems and functional/logic programming simply haven't
filtered out of the university in sufficient quantity to make an impact on
the marketplace.  The average BS-in-CS graduate probably has had very
limited exposure to these techniques, hence he/she will be afraid of the
unknown and will prefer to stick with "conventional techniques."

To say that AI will never catch on is like saying that high-level languages 
should never have caught on.  At one point it looked unlikely that HLLs
would gain wide acceptance; better equipment and better understanding by
the programming community made them practical.

					- Steve
					mrspock@hubcap.clemson.edu
					...!gatech!hubcap!mrspock

garyb@hpmwtla.HP.COM (Gary Bringhurst) (04/08/88)

>    Some people wondered what
> was the use of opening up a trans-continental railroad when the pony 
> express could send the same letter or package to where you wanted in just
> seven days....
>
>       Sean Brunnock
>       University of Lowell
>       sbrunnoc@eagle.cs.ulowell.edu

I have to agree with Sean here.  So let's analyze his analogy more closely.
AI is to the railroad as conventional CS wisdom is to the pony express.
Railroads can move mail close to three times faster than ponies, therefore
AI programs perform proportionately better than the alternatives, and are not
sluggish or resource gluttons.  Trains are MUCH larger than ponies, so AI
programs must be larger as well.  Trains travel only on well-defined tracks,
while ponies have no such limitations...

Hey, don't trains blow a lot of smoke?

Gary L. Bringhurst

brianc@daedalus (Brian Colfer) (04/11/88)

Douglas Hofstadter says in Gödel, Escher, Bach that we are probably 
too dumb to understand ourselves at the level needed to make an intelligence 
comparable to our own.  He uses the analogy of giraffes, which just
don't have the bio-hardware to contemplate their own existence.

We too may just not have the bio-hardware to organize a true
intelligence.  Now there are many significant things to be done short
of this goal.  The real question for AI is, "Can there really be an
alternative paradigm to the Turing test which will guide and inspire
the field in significant areas?"

Well...that's my $0.02


===============================================================================
             : UC San Francisco       : brianc@daedalus.ucsf.edu 
Brian Colfer : Dept. of Lab. Medicine : ...!ucbvax!daedalus.ucsf.edu!brianc 
             :  PH. 415-476-2325      : brianc@ucsfcca.bitnet
===============================================================================

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/11/88)

In article <5789@swan.ulowell.edu> sbrunnoc@eagle.UUCP (Sean Brunnock) writes:
>   Bear with me while I put this into a sociological perspective. .....
>   The information age will revolutionize agriculture and industry just as
>industry revolutionized agriculture one hundred years ago

Sociologists study the present, not the future.  I presume the "Megatrends" book
cited is Toffler-style futurology, and this sort of railway-journey light 
reading has no connection with rigorous sociology/contemporary anthropology.

The only convincing statements about the future which competent sociologists
generally make are related to the likely effects of social policy.  Such 
statements are firmly rooted in a defensible analysis of the present.

This ignorance of the proper practices of historians, anthropologists,
sociologists etc. reinforces my belief that as long as AI research is
conducted in philistine technical vacuums, the whole research area
will just chase one dead end after another.

maddoxt@novavax.UUCP (Thomas Maddox) (04/16/88)

In article <1134@its63b.ed.ac.uk> gvw@its63b.ed.ac.uk (G Wilson) writes:

>I think AI can be summed up by Terry Winograd's defection.  His
>SHRDLU program is still quoted in *every* AI textbook (at least all
>the ones I've seen), but he is no longer a believer in the AI
>research programme (see "Understanding Computers and Cognition",
>by Winograd and Flores). 
>
	Using this same reasoning, one might give up quantum
mechanics because of Einstein's "defection."  Whether a particular
researcher continues his research is an interesting historical
question (and indeed many physicists lamented the loss of Einstein),
but it does not call into question the research program itself, which
must stand or fall on its own merits.
	AI will continue to produce results and remain a viable
enterprise, or it won't and will degenerate.  However, so long as it
continues to feed powerful ideas and techniques into the various
fields it connects with, to dismiss it seems remarkably premature.  If
you are one of the pro- or anti-AI heavyweights, i.e., someone with
power, prestige, or money riding on society's evaluation of AI
research, then you join the polemic with all guns firing.  
	The rest of us can continue to enjoy both the practical and
intellectual fruits of the research and the debate.  

maddoxt@novavax.UUCP (Thomas Maddox) (04/27/88)

In article <978@crete.cs.glasgow.ac.uk> gilbert@crete.UUCP (Gilbert Cockton) writes:
>
>Sociologists study the present, not the future.  I presume the "Megatrends" book
>cited is Toffler-style futurology, and this sort of railway-journey light 
>reading has no connection with rigorous sociology/contemporary anthropology.
>
>The only convincing statements about the future which competent sociologists
>generally make are related to the likely effects of social policy.  Such 
>statements are firmly rooted in a defensible analysis of the present.
>
>This ignorance of the proper practices of historians, anthropologists,
>sociologists etc. reinforces my belief that as long as AI research is
>conducted in philistine technical vacuums, the whole research area
>will just chase one dead end after another.

	"Rigorous sociology/contemporary anthropology"?  Ha ha ha ha
ha ha ha ha, &c.  While much work in AI from its inception has
consisted of handwaving and wishful thinking, the field has produced
and continues to produce ideas that are useful.  And some of the most
interesting investigations of topics once dominated by the humanities,
such as theory of mind, are taking place in AI labs.  By comparison,
sociologists produce a great deal of nonsense, and indeed the social
"sciences" in toto are afflicted by conceptual confusion at every
level.  Ideologues, special interest groups, purveyors of outworn
dogma (Marxists, Freudians, et alia) continue to plague the social
sciences in a way that would be almost unimaginable in the sciences,
even in a field as slippery, ill-defined, and protean as AI.  
	So talk about "philistine technical vacuums" if you wish, but
remember that by and large people know which emperor has no clothes.
Also, if you want to say "one dead end after another," you might
adduce actual dead ends pursued by AI research and contrast them
with non-dead ends so that the innocent who stumbles across your
remark won't be utterly misled by your unsupported assertions.   

simon@comp.lancs.ac.uk (Simon Brooke) (04/28/88)

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
(flaming against an article submitted by Gilbert Cockton)

>	"Rigorous sociology/contemporary anthropology"?  Ha ha ha ha
>ha ha ha ha, &c.  

What do the third and subsequent iterations of the symbol 'ha' add to the
meaning of this statement? Are we to assume the author doubts the rigour
of Sociology, or the contemporary nature of anthropology?

>And some of the most interesting investigations of topics once dominated 
>by the humanities, such as theory of mind, are taking place in AI labs.  

This is, of course, true - some of it is. Just as some of the most
interesting advances in Artificial Intelligence take place in Philosophy
and Linguistics departments. This is what one would expect, after all; for
what is AI but an experimental branch of Philosophy?

>sociologists produce a great deal of nonsense, and indeed the social
>"sciences" in toto are afflicted by conceptual confusion at every
>level.  Ideologues, special interest groups, purveyors of outworn
>dogma (Marxists, Freudians, et alia) continue to plague the social
>sciences in a way that would be almost unimaginable in the sciences,

Gosh!  Isn't it nice, now and again, to read the words of someone whose
knowledge of a field is so deep and thorough that they can sum it up in
one short paragraph!

It is, of course, true that some embarrassingly poor work is published in
Sociology, just as in any other discipline; perhaps indeed there is more
poor sociology, simply because sociology is more difficult to do well than
any other type of study - most of the phenomena of sociology occur in the
interaction between individuals, and this interaction cannot readily be
accessed by an observer who is not party to the interaction.  Yet if you
are part of the interaction, it will not proceed as it would with someone
else...

Again, sociological investigation, because it looks at us in a 
rigorous way which we are not used to, often leads to conclusions which 
seem counter-intuitive - they cut through our self-deceits and hypocrisies.
So we prefer to abuse the messenger rather than listen to the message.

For the rest:

He who knows not and knows not he knows not......

A dictum which I will conveniently forget next time I feel like shooting
my mouth off.

** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      * 
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Thought for today: Most prologs chew everything very slowly anyway,  * 
***just being polite I guess********************************************* 

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/28/88)

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>By comparison, sociologists produce a great deal of nonsense, and indeed the 
>social "sciences" in toto are afflicted by conceptual confusion at every
>level.  Ideologues, special interest groups, purveyors of outworn
>dogma (Marxists, Freudians, et alia) continue to plague the social
>sciences in a way that would be almost unimaginable in the sciences,
>even in a field as slippery, ill-defined, and protean as AI.  
There are more of them :-)  But if you looked at the work of U.K. sociologists
like Townsend and Halsey on Age, Poverty, Health and social mobility, you might
find something less concerned with theory and more with rigorous investigation.

I find the conflict in the humanities and behavioural "sciences" far more healthy
than the uncritical following of paradigm fashions in science.  Whilst the
former areas encourage an understanding of methodology and epistemology, the
sciences assume their core methods are correct and get on with it.  A lot boils
down to personality (Liam Hudson, Contrary Imaginations).  The reason that 
ideology and methodological pluralism would be unimaginable in the sciences may
have something to do with the nature (and please, not the LACK) of the
scientific imagination compared to the humanist imagination.  Note that
materialism, determinism, statistical inference and positivism are no less
outworn dogmas and ideologies than are Marxism, Freudianism, etc.  My 
experience is that someone from a humanist critical tradition will have a better
understanding of the assumptions behind methodologies than will scientists and,
even more so, engineers.  Out of such understandings came the rejection of first
Medieval Catholicism, then Seventeenth Century materialism, Twentieth Century 
Behaviourism and Systems Theory, and now the "pure" AI position.  Assumptions
behind AI are similar to many which have been around since the warm humility of
Renaissance Humanism cooled into the mechanical fascination of the Baroque.

>So talk about "philistine technical vacuums" if you wish, but
>remember that by and large people know which emperor has no clothes.
So who is it who is deciding strategy for most Western social programmes?  
Clothes or no clothes, social administrators have an empire which extends
beyond academia and many of them draw on sociological concepts and results in
their work.  It is in their complete ignorance of socialisation that AI workers
fall down in their study of machine learning.  Most human learning takes
place in a social context, with only the private interests of marginal 
adolescents and adults taking place in isolation - but here they draw on
problem-solving capabilities which were nurtured in a social context.  The starkest
examples of the nature and role of primary socialisation come from those few
unfortunate children who had been isolated from birth.  They are savage animals.
If parents had to interact with their children in FOPC or connectionist inputs,
the same would be true, until the children were taken into care.

>Also, if you want to say "one dead end after another," you might adduce actual
>dead ends pursued by AI research and contrast them with non-dead ends.

DEAD ENDS
Computational Linguistics, continuous speech understanding, intelligent vision,
reliable expert systems which do not require endless maintenance, human
problem solving, the physical symbol system hypothesis, knowledge representation
formalisms using computable models.  Largely areas where some other paradigm
within another discipline can make progress as the lead weight of computability
is not suffocating research.  Generally due to knowledge representation problems
- even the Novel has problems here :-)  If you can't write it in a text-book 
(e.g. clinical diagnosis, teaching techniques, advocacy), you'll never get it 
on a machine - impossible in superset (NL) => impossible in subset (FOPC, 
computationally denotable/constructable).  A problem in AI is trying to solve 
other people's problems, where those other people know more about the problem 
than you ever will - they live it day in, day out.

NON-DEAD ENDS
Much work done under the name of AI is good - low-to-medium level vision,
restricted natural language, knowledge-based programming formalisms,
theorem-proving and highly-constrained technical planning problems.  Indeed,
most technical knowledge, being artificial and symbolic from the outset, is an
obvious candidate for AI modelling and there is nothing in the humanist 
tradition which would doubt the viability of this work.  Here knowledge 
representation is easy, because the domain will generally be so boring (but 
economically/environmentally/security critical) that no-one wants to argue 
about it.  Much technical expertise executed by humans is best suited to 
machines.  In HCI research, sensible work on intelligent (=supportive) user
interfaces is getting somewhere, but then coming up with a computer model of a
computer system is hardly a major challenge in knowledge representation
techniques.  Coming up with a computer model of a user is also possible, as long
as we don't try to model anything controversial, but stick to observable 
behaviour and user-negotiated input.

The main objection to AI is when it claims to approach our humanity.

			It cannot.

tjhorton@csri.toronto.edu (Tim Horton) (04/29/88)

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>...  Ideologues, special interest groups, purveyors of outworn
>dogma (Marxists, Freudians, et alia) continue to plague the social
>sciences in a way that would be almost unimaginable in the sciences,
>even in a field as slippery, ill-defined, and protean as AI.  

I suspect people just haven't run into it.  It's there, but not as strong
in the natural sciences because (I suspect) there are externalized measuring
sticks in most of them on which to depend for evaluations.

My experience has been that such silliness, whether in the natural or social
sciences, is practically always about the status of a paradigm or theory for
which no demonstrable procedure exists for verification or judgement either
way.  (And there's way more backstabbing in the social sciences, as a result).

Such situations *do* occur in the natural sciences!  I'm sure that you too can
name some paradigmatic zealots.

In AI research environments, certain problems are "worth" study, certain
things are allowed, certain things are required, certain approaches are
a priori valid.  The rationale for such biases is generally vacuous, or at
least as moot as can be.  And there have been some heated arguments to
highlight the strength of these biases:  the "procedural versus declarative"
debate, and more recently debates about the relevance of logic, for instance.
This whole debate, here in this newsgroup, requires that there are
unsubstantiated differences of opinion that people are willing to commit
themselves to.

The history and philosophy of science, although a social science, is well
worth looking into -- it is, I think, an exception to the "social science is
weak" tendency the article above alluded to.  I doubt that anyone who
followed one of the quality expositions of the development of science through
history would still find such silliness in science so unimaginable.

(Among my favorites, by the way, are the old chemistry theories of caloric and
phlogiston, which once completely dominated research.)

Current conceptions define the problems, the approaches, the value of a piece
of work, and even what will be seen or imagined.  I find it hard to believe
that AI doesn't have a strong case of this disease right now.

glg@sfsup.UUCP (G.Gleason) (05/04/88)

In article <1053@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>The main objection to AI is when it claims to approach our humanity.

>			It cannot.

That's a pretty strong claim to make without backing it up.

I'm not saying that I disagree with you, and I also object to all the
hype which makes this claim for current AI, or anything that is likely
to come out of current research.  I'm also not saying your claim is
wrong, only that it is unjustified; there is more to learn before we
can really say.

There are new ideas in biology that build upon "systems theory," and
probably can be tied in with the physical symbol system hypothesis (I
hope I got that right), which suggest that information or "linguistic
interaction" is fundamental to living organisms.

In the May/June issue of "The Sciences," I found an article called
"The Life of Meaning." It was in a regular column (The Information Age).
I won't summarize the whole article, but it does present some compelling
examples, and arguments for extending the language of language to talking
about cellular mechanisms.  One is how cyclic AMP acts as an internal
message in E. coli.  When an E. coli lands in an environment without
food, cyclic AMP binds to the DNA, and switches the cell over to a
"motion" program.  Cyclic AMP in this role has all the attributes of
a symbolic (or linguistic) message: the choice of symbol is arbitrary,
and the "meaning" is context dependent.  This becomes even more clear
with the example of human adrenaline response in liver cells.  The
hormone binds to sites on the outside of the cell which causes an
internal message to be generated, which just happens to be cyclic AMP.
The cell responds to the cyclic AMP (not by a DNA-based mechanism as
in E. coli) by producing more glucose.  The composition of the message
has nothing to do with the trigger or the response; it is symbolic.
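
In programming terms, the point is that the token gets its meaning from
the receiver rather than from the token itself.  A loose sketch (in
Python; the biology is simplified, as in the article):

    # The same arbitrary token means different things in different receivers.
    responses = {
        ("e_coli", "cyclic-AMP"):     "switch to motion program",
        ("liver_cell", "cyclic-AMP"): "produce more glucose",
    }

    def receive(cell_type, message):
        # Meaning lives in the (receiver, message) pair, not in the message.
        return responses.get((cell_type, message), "no response")

    print(receive("e_coli", "cyclic-AMP"))
    print(receive("liver_cell", "cyclic-AMP"))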

So, how is this relevant to the original discussion?  I don't see any
fundamental difference between exchanging chemical messages and electronic
ones.  Although this does not imply that configurations of electronic and
electromechanical components that we would call "alive" are possible or
that it is possible to design and build one, it doesn't rule it out, and
more importantly it suggests a fundamental similarity between living
organisms and "information processors."  The only difference is how they
arise.  Possibly an important difference, but we have no way to prove this
now.

Gerry Gleason

jeff@aiva.ed.ac.uk (Jeff Dalton) (05/06/88)

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>	"Rigorous sociology/contemporary anthropology"?  Ha ha ha ha
>ha ha ha ha, &c. [...]  By comparison, sociologists produce a great deal
>of nonsense, and indeed the social "sciences" in toto are afflicted by
>conceptual confusion at every level.  Ideologues, special interest groups,
>purveyors of outworn dogma (Marxists, Freudians, et alia) continue to
>plague the social sciences in a way that would be almost unimaginable in
>the sciences, even in a field as slippery, ill-defined, and protean as AI.

Speaking of outworn dogmas, AI seems to be plagued by behaviorists,
or at least people who seem to think that having the right behavior
is all that is of interest: hence the popularity of the Turing Test.

>Also, if you want to say "one dead end after another," you might
>adduce actual dead ends pursued by AI research and contrast them
>with non-dead ends so that the innocent who stumbles across your
>remark won't be utterly misled by your unsupported assertions.   

Does anyone actually think the current techniques are capable of
producing human-level intelligence just by scaling up?  They are all
likely to be dead ends in that sense, though they may well be useful
for something else.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton