[comp.society.futures] Time Magazine -- Computers of the Future

bzs@BU-CS.BU.EDU (Barry Shein) (03/27/88)

The cover article this week for Time Magazine is "Computers of the
Future". Mainly what they mean by that are the two paths of
supercomputing and artificial intelligence (eg. neural networking).

I haven't finished it yet; it seems fun. Some errors, of course, but you
don't expect extreme technical accuracy from such an article anyhow.

	-Barry Shein, Boston University

klee@daisy.UUCP (Ken Lee) (03/28/88)

In article <8803270154.AA08607@bu-cs.bu.edu> bzs@BU-CS.BU.EDU (Barry Shein) writes:
>
>The cover article this week for Time Magazine is "Computers of the
>Future". Mainly what they mean by that are the two paths of
>supercomputing and artificial intelligence (eg. neural networking).
>
>I haven't finished it yet; it seems fun. Some errors, of course, but you
>don't expect extreme technical accuracy from such an article anyhow.

What do people think of the PRACTICAL future of artificial intelligence?
For a while, it seemed like it was going to take off.  All sorts of
expert systems and tools houses started to appear.  Most of these are
bankrupt now.  Even the biggies like Symbolics, Teknowledge, and
IntelliCorp are having major trouble.  The only companies that are
successful are Star Wars contractors, and I'm not sure if that's what
you'd call a practical application.

Is AI just too expensive and too complicated for practical use?  I
spent 3 years in the field and I'm beginning to think the answer is
mostly yes.  In my opinion, all working AI programs are either toys or
could have been developed much more cheaply using conventional
techniques.

Why is AI expensive?  No matter how good our theoretical inference or
representation techniques get, we still have the practical problem of
extracting human knowledge and transferring it to the machine.  The problem
I see is that every application is sufficiently unique to preclude
automated knowledge engineering.  Since you must solve it by hand
anyway, you could probably program it more efficiently using conventional
techniques.

Does AI have any advantage over conventional programming?  There are
claims about the benefits of learning systems, meta-level algorithms,
abstraction, etc., but for these to be cost effective, they must be
developed in an application-independent fashion.  Is this possible?
I don't think so, at least not in the near future.

I'm probably being short-sighted and ignoring the (very) long-term
possibilities.  There are also very practical side-effects, like the
popularizing of object-oriented programming and powerful programming
environments.  Maybe I'm just looking at AI too broadly and
not at specific application areas.

What do you think?  Thanks for your thoughts.  No, I'm not an AI trying
to clone you.

Ken
-- 
What's the difference between a used car salesman and a computer salesman?
The used car salesman knows when he's lying.

sbrunnoc@eagle.ulowell.edu (Sean Brunnock) (03/28/88)

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>What do people think of the PRACTICAL future of artificial intelligence?
>
>Is AI just too expensive and too complicated for practical use?
>
>Does AI have any advantage over conventional programming?  

   Bear with me while I put this into a sociological perspective. The first
great "age" in mankind's history was the agricultural age, followed by the
industrial age, and now we are heading into the information age. The author
of "Megatrends" points out the large rise in the number of clerks as
evidence of this. 

   The information age will revolutionize agriculture and industry just as
industry revolutionized agriculture one hundred years ago. Industry gave to
the farmer the reaper, cotton gin, and a myriad of other products which
made his job easier. Food production went up an order of magnitude, and by
the law of supply and demand, food became less valuable and farming became
less profitable.

   The industrial age was characterized by machines that took a lot of
manual labor out of the hands of people. The information age will be
characterized by machines that will take over mental tasks now accomplished
by people.
   
   For example, give a machine access to knowledge of aerodynamics,
engines, materials, etc. Now tell this machine that you want it to
design a car that can go this fast, use this much fuel per mile, cost
this much to make, etc. The machine thinks about it and out pops a 
design for a car that meets these specifications. It would be the
ultimate car with no room for improvement (unless some new scientific
discovery was made) because the machine looks at all of the possibilities.
These are the types of machines that I expect AI to make possible
in the future.

   I know this is an amateurish analysis, but it convinces me to study
AI.

   As for using AI in conventional programs? Some people wondered what
was the use of opening up a transcontinental railroad when the pony
express could carry the same letter or package to where you wanted in just
seven days. AI may be impractical now, but we have to keep making an effort
at it.


       Sean Brunnock
       University of Lowell
       sbrunnoc@eagle.cs.ulowell.edu

andrew@trlamct.OZ.AU (Andrew Jennings) (03/29/88)

I managed to read all of this. As fate would have it, I was stranded all morning
at an airport waiting for a plane, and the issue of Time was on the newsstands.

I guess you have to allow for a fair bit of hype in these things. Even so the
article on AI seemed fairly over-optimistic to me. Sure there are many "second-
wave" applications of expert systems out there, but there are also a lot of 
research issues open. 

I'd recommend the articles, and the picture of DeKleer's socks is also worthwhile.

jbn@glacier.STANFORD.EDU (John B. Nagle) (03/29/88)

In article <5789@swan.ulowell.edu> sbrunnoc@eagle.UUCP (Sean Brunnock) writes:
>industrial age, and now we are heading into the information age. The author
>of "Megatrends" points out the large rise in the number of clerks as
>evidence of this. 

      The number of office workers in the U.S. peaked in 1985-86 and has 
declined somewhat since then.  White collar employment by the Fortune 500
is down substantially over the last five years.  The commercial real estate
industry has been slow to pick up on this, which is why there are so many
new but empty office buildings.  The new trend is back toward manufacturing.
You can't export services, except in a very minor way.  (Check the numbers
on this; they've been published in various business magazines and
can be obtained from the Department of Commerce.)

>   For example, give a machine access to knowledge of aerodynamics,
>engines, materials, etc. Now tell this machine that you want it to
>design a car that can go this fast, use this much fuel per mile, cost
>this much to make, etc. The machine thinks about it and out pops a 
>design for a car that meets these specifications. It would be the
>ultimate car with no room for improvement (unless some new scientific
>discovery was made) because the machine looks at all of the possibilities.

      Wrong.  Study some combinatorics.  Exhaustive search on a problem like
that is hopeless.  The protons would decay first.
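
To put rough numbers on that, here is a back-of-the-envelope sketch in
Python; the design-space figures are invented for illustration:

    # Hypothetical design space: 100 independent parameters,
    # 10 choices each, enumerated exhaustively.
    candidates = 10 ** 100
    evals_per_second = 10 ** 12        # an absurdly generous machine
    seconds_per_year = 3.15e7
    years = candidates / (evals_per_second * seconds_per_year)
    print("%.1e years" % years)        # roughly 3.2e+80 years
    # Experimental lower bounds on the proton lifetime are around
    # 1e31 years or more, so the protons do indeed decay first.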

					John Nagle

doug@isishq.UUCP (Doug Thompson) (03/29/88)

 UN>What do you think?  Thanks for your thoughts.  No, I'm not an AI 
 UN>trying to clone you. 
 
Hmmmm. I think intelligence is not nearly well enough understood for the 
process of artificializing it to go anywhere. Most proponents of 
artificial intelligence don't understand human intelligence. Most 
computer scientists who do understand human intelligence think AI 
is nonsense. Now I'm presuming that *I* know what I mean when I say 
"understand". But so far those who say our minds are too complex to 
mechanize have been proven right, and those who say "our minds are 
really simple" have yet to succeed in meaningful mechanization of them. 
 
You can't act like a human being unless you think you are a human being 
and are treated like a human being and act like a human being. It goes 
in circles. But so far our machines do not resemble human beings much at 
all.  
 
 
------------------------------------------------------------------------ 
Fido      1:221/162 -- 1:221/0                         280 Phillip St.,   
UUCP:     !watmath!isishq!doug                         Unit B-3-11 
                                                       Waterloo, Ontario 
Bitnet:   fido@water                                   Canada  N2L 3X1 
Internet: doug@isishq.math.waterloo.edu                (519) 746-5022 
------------------------------------------------------------------------ 

rwojcik@bcsaic.UUCP (Rick Wojcik) (03/30/88)

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>
>Is AI just too expensive and too complicated for practical use?  I
>spent 3 years in the field and I'm beginning to think the answer is
>mostly yes.  In my opinion, all working AI programs are either toys or
>could have been developed much more cheaply using conventional
>techniques.
>
Your posting was clearly intended to provoke, but I'll try to keep the
flames low :-).  Please try to remember that AI is a vast subject area.
It is expensive because it requires a great deal of expertise in language,
psychology, philosophy, etc.--not just programming skills.  It is also a
very high risk area, as anyone can see.  But the payoff can be
tremendous.  Moreover, your opinion that conventional techniques can
replace AI is ludicrous.  Consider the area of natural language.  What
conventional techniques that you know of can extract information from
natural language text or translate a passage from English to French?
Maybe you believe that we should stop all research on robotics.  If not,
would you like to explain how conventional programming can be used to get
robots to see objects in the real world?  But maybe we should give up on
the whole idea.  We can replace robots with humans.  Would you like to
volunteer for the bomb squad :-)?  In the development stage, AI is expensive,
but in the long term it is cost effective.  Your pessimism about the field
seems to be based on the failure of expert systems to live up to the hype.
The future of AI is going to be full of unrealistic hype and disappointing
failures.  But the demand for AI is so great that we have no choice but to
push on.
-- 
Rick Wojcik   csnet:  rwojcik@boeing.com	   
              uucp:  {uw-june  uw-beaver!ssc-vax}!bcsaic!rwojcik 
address:  P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone:    206-865-3844

jsnyder@june.cs.washington.edu (John Snyder) (03/31/88)

In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>...  But the demand for AI is so great that we have no choice but to
>push on.

We always have the choice not to develop a technology; what may be lacking
are reasons or will.

jsnyder@june.cs.washington.edu              John R. Snyder
{ihnp4,decvax,ucbvax}!uw-beaver!jsnyder     Dept. of Computer Science, FR-35
                                            University of Washington
206/543-7798                                Seattle, WA 98195

simon@comp.lancs.ac.uk (Simon Brooke) (03/31/88)

In article <5789@swan.ulowell.edu> sbrunnoc@eagle.UUCP (Sean Brunnock) writes:
>In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>>What do people think of the PRACTICAL future of artificial intelligence?
>>
>>Is AI just too expensive and too complicated for practical use?
>>
>>Does AI have any advantage over conventional programming?  
>
>   Bear with me while I put this into a sociological perspective. The first
>great "age" in mankind's history was the agricultural age, followed by the
>industrial age, and now we are heading into the information age. The author

Oh God! I suppose the advantage of the net is that it allows us to betray
our ignorance in public, now and again. This is 'sociology'? Dear God!

>   For example, give a machine access to knowledge of aerodynamics,
>engines, materials, etc. Now tell this machine that you want it to
>design a car that can go this fast, use this much fuel per mile, cost
>this much to make, etc. The machine thinks about it and out pops a 
>design for a car that meets these specifications. 

And here we really do have God - the General Omnicompetent Device - which
can search an infinite space in finite time. (Remember that Deep Thought
took 7 1/2 million years to calculate the answer to the ultimate question
of life, the universe, and everything - and at the end of that time could
not say what the question was).

Seriously, if this is why you are studying AI, throw it in and study some
philosophy. There *are* good reasons for studying AI: some people do it in
order to 'find out how people work' - I have no idea whether this project
is well directed, but it is certain to raise a lot of interesting
problems. Another is to use it as a tool for exploring our understanding
of such concepts as 'understanding', 'knowledge', 'intelligence' - or, in
my case, 'explanation'. Obviously I believe this project is well directed,
and I know it raises lots of interesting problems...

And occasionally these interesting problems will spin off technologies
which can be applied to real world tasks. But to see AI research as driven
by the need to produce spin-offs seems to me to be turning the whole
enterprise on its head.


-- 
** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      *
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
************************************************************************* 

hollombe@ttidca.TTI.COM (The Polymath) (04/01/88)

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>What do people think of the PRACTICAL future of artificial intelligence?

My employers just sponsored a week-long in-house series of seminars,
films, vendor presentations and demonstrations of expert systems
technology.  I attended all of it, so I think I can reasonably respond to
this.

Apparently, the expert systems/knowledge engineering branch of so-called
AI (of which, more later) has made great strides in the last few years.
There are many (some vendors claim thousands) expert-system-based
commercial applications running in large and small corporations all over
the country.

In the past week we saw presentations by Gold Hill Computers (GOLDWORKS),
Aion Corp. (ADS), Texas Instruments (Personal Consultant Plus) and Neuron
Data (Nexpert Object).  The presentations were impressive, even taking
into account their sales nature.  None of the vendors is in any financial
trouble, to say the least.  All claimed many delivered, working systems.

A speaker from DEC explained that their VAX configurator couldn't
have been developed without an expert system (they tried and failed) and
that it is now one of the oldest and most famous expert systems running.

It was pointed out by some of the speakers that companies using expert
systems tend to keep a low profile about it.  They regard their systems
as company secrets, proprietary information that gives them an edge in
their market.

Personal Impressions:

The single greatest advantage of expert systems seems to be their rapid
prototyping capability.  They can produce a working system in days or
weeks that would require months or years, if it could be done at all, with
conventional languages.  That system can subsequently be modified very
easily and rapidly to meet changing conditions or include new rules as
they're discovered.  Once a given algorithm has stabilized over time, it
can be re-written in a more conventional language, but still accessed by
the expert system.  The point is that the algorithm might never have been
determined at all but for the adaptable rapid prototyping environment.
(The DEC VAX configurator, mentioned above, is an example of this.  Much of
it, but not all, has been converted to conventional languages.)
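
To give a flavor of why rule-based prototyping is fast, here is a toy
forward-chaining engine sketched in Python; the rules are invented, not
anything from DEC's configurator:

    # Toy forward chaining: a rule fires when all of its premises are
    # in the fact base, adding its conclusion; repeat until quiescent.
    rules = [
        ({"cpu ordered", "no cabinet"}, "add cabinet"),
        ({"add cabinet"}, "add power supply"),
        ({"add power supply", "cpu ordered"}, "config complete"),
    ]
    facts = {"cpu ordered", "no cabinet"}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    print(facts)

Adding or changing a rule is a one-line edit, which is the whole point of
the rapid-prototyping argument.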

As for expense, prices of systems vary widely, but are coming down.  TI
offers a board with a LISP mainframe-on-a-chip (their term) that will turn
a Mac II into a LISP machine for as little as $7500.  Other systems went
as high as an order of magnitude over that.  I personally think these
won't really take off 'til the price drops another order of magnitude to
put them in the hands of the average home hacker.

Over all, I'd have to say that expert systems, at least, are alive and
well with a bright future ahead of them.

About Artificial Intelligence:

I maintain this is a contradiction in terms, and likely to be so for the
foreseeable future.  If we take "intelligence" to mean more than expert
knowledge of a very narrow domain there's nothing in existence that can
equal the performance of any mammal, let alone a human being.  We're just
beginning to explore the types of machine architectures whose great^n-
grandchildren might, someday, be able to support something approaching
true AI.  I'll be quite amazed to see it in my lifetime (but the world has
amazed me before (-: ).

-- 
The Polymath (aka: Jerry Hollombe, hollombe@TTI.COM)   Illegitimati Nil
Citicorp(+)TTI                                           Carborundum
3100 Ocean Park Blvd.   (213) 452-9191, x2483
Santa Monica, CA  90405 {csun|philabs|psivax|trwrb}!ttidca!hollombe

arti@vax1.acs.udel.EDU (Arti Nigam) (04/02/88)

In article <4565@june.cs.washington.edu> you write:
>
>We always have the choice not to develop a technology; what may be lacking
>are reasons or will.

I heard this from one of the greats in computer-hardware evolution, only
I don't remember his name.  What he said, and I say, is essentially this:
if you are part of an effort towards progress, in whatever field or
domain, you should have some understanding of WHERE you are going and
WHY you want to get there.

Arti Nigam

gvw@its63b.ed.ac.uk (G Wilson) (04/03/88)

In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>         Moreover, your opinion that conventional techniques can
>replace AI is ludicrous.  Consider the area of natural language.  What
>conventional techniques that you know of can extract information from
>natural language text or translate a passage from English to French?

Errmmm...show me *any* program which can do these things?  To date,
AI has been successful in these areas only when used in toy domains.

>The future of AI is going to be full of unrealistic hype and disappointing
>failures. 

Just like its past, and present.  Does anyone think AI would be as prominent
as it is today without (a) the unrealistic expectations of Star Wars,
and (b) America's initial nervousness about the Japanese Fifth Generation
project?

>           But the demand for AI is so great that we have no choice but to
>push on.

Manifest destiny??  A century ago, one could have justified
continued research in phrenology by its popularity.  Judge science
by its results, not its fashionability.

I think AI can be summed up by Terry Winograd's defection.  His
SHRDLU program is still quoted in *every* AI textbook (at least all
the ones I've seen), but he is no longer a believer in the AI
research programme (see "Understanding Computers and Cognition",
by Winograd and Flores). 

Greg Wilson

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/11/88)

In article <5789@swan.ulowell.edu> sbrunnoc@eagle.UUCP (Sean Brunnock) writes:
>   Bear with me while I put this into a sociological perspective. .....
>   The information age will revolutionize agriculture and industry just as
>industry revolutionized agriculture one hundred years ago

Sociologists study the present, not the future. I presume the "Megatrends" book
cited is Toffler-style futurology, and this sort of railway-journey light 
reading has no connection with rigorous sociology/contemporary anthropology.

The only convincing statements about the future which competent sociologists
generally make are related to the likely effects of social policy.  Such 
statements are firmly rooted in a defensible analysis of the present.

This ignorance of the proper practices of historians, anthropologists,
sociologists etc. reinforces my belief that as long as AI research is
conducted in philistine technical vacuums, the whole research area
will just chase one dead end after another.

maddoxt@novavax.UUCP (Thomas Maddox) (04/16/88)

In article <1134@its63b.ed.ac.uk> gvw@its63b.ed.ac.uk (G Wilson) writes:

>I think AI can be summed up by Terry Winograd's defection.  His
>SHRDLU program is still quoted in *every* AI textbook (at least all
>the ones I've seen), but he is no longer a believer in the AI
>research programme (see "Understanding Computers and Cognition",
>by Winograd and Flores). 
>
	Using this same reasoning, one might give up quantum
mechanics because of Einstein's "defection."  Whether a particular
researcher continues his research is an interesting historical
question (and indeed many physicists lamented the loss of Einstein),
but it does not call into question the research program itself, which
must stand or fall on its own merits.
	AI will continue to produce results and remain a viable
enterprise, or it won't and will degenerate.  However, so long as it
continues to feed powerful ideas and techniques into the various
fields it connects with, to dismiss it seems remarkably premature.  If
you are one of the pro- or anti-AI heavyweights, i.e., someone with
power, prestige, or money riding on society's evaluation of AI
research, then you join the polemic with all guns firing.  
	The rest of us can continue to enjoy both the practical and
intellectual fruits of the research and the debate.  

maddoxt@novavax.UUCP (Thomas Maddox) (04/27/88)

In article <978@crete.cs.glasgow.ac.uk> gilbert@crete.UUCP (Gilbert Cockton) writes:
>
>Sociologists study the present, not the future. I presume the "Megatrends" book
>cited is Toffler-style futurology, and this sort of railway-journey light 
>reading has no connection with rigorous sociology/contemporary anthropology.
>
>The only convincing statements about the future which competent sociologists
>generally make are related to the likely effects of social policy.  Such 
>statements are firmly rooted in a defensible analysis of the present.
>
>This ignorance of the proper practices of historians, anthropologists,
>sociologists etc. reinforces my belief that as long as AI research is
>conducted in philistine technical vacuums, the whole research area
>will just chase one dead end after another.

	"Rigorous sociology/contemporary anthropology"?  Ha ha ha ha
ha ha ha ha, &c.  While much work in AI from its inception has
consisted of handwaving and wishful thinking, the field has produced
and continues to produce ideas that are useful.  And some of the most
interesting investigations of topics once dominated by the humanities,
such as theory of mind, are taking place in AI labs.  By comparison,
sociologists produce a great deal of nonsense, and indeed the social
"sciences" in toto are afflicted by conceptual confusion at every
level.  Ideologues, special interest groups, purveyors of outworn
dogma (Marxists, Freudians, et alia) continue to plague the social
sciences in a way that would be almost unimaginable in the sciences,
even in a field as slippery, ill-defined, and protean as AI.  
	So talk about "philistine technical vacuums" if you wish, but
remember that by and large people know which emperor has no clothes.
Also, if you want to say "one dead end after another," you might
adduce actual dead ends pursued by AI research and contrast them
with non-dead ends so that the innocent who stumbles across your
remark won't be utterly misled by your unsupported assertions.   

simon@comp.lancs.ac.uk (Simon Brooke) (04/28/88)

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
(flaming against an article submitted by Gilbert Cockton)

>	"Rigorous sociology/contemporary anthropology"?  Ha ha ha ha
>ha ha ha ha, &c.  

What do the third and subsequent iterations of the symbol 'ha' add to the
meaning of this statement? Are we to assume the author doubts the rigour
of Sociology, or the contemporary nature of anthropology?

>And some of the most interesting investigations of topics once dominated 
>by the humanities, such as theory of mind, are taking place in AI labs.  

This is, of course, true - some of it is. Just as some of the most
interesting advances in Artificial Intelligence take place in Philosophy
and Linguistics departments. This is what one would expect, after all; for
what is AI but an experimental branch of Philosophy?

>sociologists produce a great deal of nonsense, and indeed the social
>"sciences" in toto are afflicted by conceptual confusion at every
>level.  Ideologues, special interest groups, purveyors of outworn
>dogma (Marxists, Freudians, et alia) continue to plague the social
>sciences in a way that would be almost unimaginable in the sciences,

Gosh! Isn't it nice, now and again, to read the words of someone whose
knowledge of a field is so deep and thorough that they can sum it up in
one short paragraph!

It is, of course, true that some embarrassingly poor work is published in
Sociology, just as in any other discipline; perhaps indeed there is more
poor sociology, simply because sociology is more difficult to do well than
any other type of study - most of the phenomena of sociology occur in the
interaction between individuals, and this interaction cannot readily be
accessed by an observer who is not party to the interaction. Yet if you
are part of the interaction, it will not proceed as it would with someone
else...

Again, sociological investigation, because it looks at us in a 
rigorous way which we are not used to, often leads to conclusions which 
seem counter-intuitive - they cut through our self-deceits and hypocrisies.
So we prefer to abuse the messenger rather than listen to the message.

For the rest:

He who knows not and knows not he knows not......

A dictum which I will conveniently forget next time I feel like shooting
my mouth off.

** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      * 
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Thought for today: Most prologs chew everything very slowly anyway,  * 
***just being polite I guess********************************************* 

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/28/88)

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>By comparison, sociologists produce a great deal of nonsense, and indeed the 
>social "sciences" in toto are afflicted by conceptual confusion at every
>level.  Ideologues, special interest groups, purveyors of outworn
>dogma (Marxists, Freudians, et alia) continue to plague the social
>sciences in a way that would be almost unimaginable in the sciences,
>even in a field as slippery, ill-defined, and protean as AI.  
There are more of them :-)  But if you looked at the work of U.K. sociologists
like Townsend and Halsey on Age, Poverty, Health and social mobility, you might
find something less concerned with theory and more with rigorous investigation.

I find the conflict in the humanities and behavioural "sciences" far more healthy
than the uncritical following of fashions of paradigms in science.  Whilst the
former areas encourage an understanding of methodology and epistemology, the
sciences assume their core methods are correct and get on with it.  A lot boils
down to personality (Liam Hudson, Contrary Imaginations).  The reason that 
ideology and methodological pluralism would be unimaginable in the sciences may
have something to do with the nature (and please, not the LACK) of the
scientific imagination compared to the humanist imagination.  Note that
materialism, determinism, statistical inference and positivism are no less
outworn dogmas and ideologies than are Marxism, Freudianism, etc.  My 
experience is that someone from a humanist critical tradition will have a better
understanding of the assumptions behind methodologies than will scientists and
even more so, engineers.  Out of such understandings came the rejection of first
Medieval Catholicism, then Seventeenth Century materialism, Twentieth Century 
Behaviourism and Systems Theory, and now the "pure" AI position.  Assumptions
behind AI are similar to many which have been around since the warm humility of
Renaissance Humanism cooled into the mechanical fascination of the Baroque.

>So talk about "philistine technical vacuums" if you wish, but
>remember that by and large people know which emperor has no clothes.
So who is it who is deciding strategy for most Western social programmes?  
Clothes or no clothes, social administrators have an empire which extends
beyond academia and many of them draw on sociological concepts and results in
their work.  It is in their complete ignorance of socialisation that AI workers
fall down in their study of machine learning.  Most human learning always takes
place in a social context, with only the private interests of marginal 
adolescents and adults taking place in isolation - but here they draw on problem
solving capabilities which were nurtured in a social context.  The starkest
examples of the nature and role of primary socialisation come from those few
unfortunate children who had been isolated from birth.  They are savage animals.
If parents had to interact with their children in FOPC or connectionist inputs,
the same would be true, until the children were taken into care.

>Also, if you want to say "one dead end after another," you might adduce actual
>dead ends pursued by AI research and contrast them with non-dead ends.

DEAD ENDS
Computational Linguistics, continuous speech understanding, intelligent vision,
reliable expert systems which do not require endless maintenance, human
problem solving, the physical symbol system hypothesis, knowledge representation
formalisms using computable models.  Largely areas where some other paradigm
within another discipline can make progress as the lead weight of computability
is not suffocating research.  Generally due to knowledge representation problems
- even the Novel has problems here :-)  If you can't write it in a text-book 
(e.g. clinical diagnosis, teaching techniques, advocacy), you'll never get it 
on a machine - impossible in superset (NL) => impossible in subset (FOPC, 
computationally denotable/constructable).  A problem in AI is trying to solve 
other people's problems, where those other people know more about the problem 
than you ever will - they live it day in day out.

NON-DEAD ENDS
Much work done under the name of AI is good - low-to-medium level vision,
restricted natural language, knowledge-based programming formalisms,
theorem-proving and highly-constrained technical planning problems.  Indeed,
most technical knowledge, being artificial and symbolic from the outset, is an
obvious candidate for AI modelling and there is nothing in the humanist 
tradition which would doubt the viability of this work.  Here knowledge 
representation is easy, because the domain will generally be so boring (but 
economically/environmentally/security critical) that no-one wants to argue 
about it.  Much technical expertise executed by humans is best suited to 
machines.  In HCI research, sensible work on intelligent (=supportive) user
interfaces is getting somewhere, but then coming up with a computer model of a
computer system is hardly a major challenge in knowledge representation
techniques.  Coming up with a computer model of a user is also possible, as long
as we don't try to model anything controversial, but stick to observable 
behaviour and user-negotiated input.

The main objection to AI is when it claims to approach our humanity.

			It cannot.

tjhorton@csri.toronto.edu (Tim Horton) (04/29/88)

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>...  Ideologues, special interest groups, purveyors of outworn
>dogma (Marxists, Freudians, et alia) continue to plague the social
>sciences in a way that would be almost unimaginable in the sciences,
>even in a field as slippery, ill-defined, and protean as AI.  

I suspect people just haven't run into it.  It's there, but not as strong
in the natural sciences because (I suspect) there are externalized measuring
sticks in most of them on which to depend for evaluations.

My experience has been that such silliness, whether in the natural or social
sciences, is practically always about the status of a paradigm or theory for
which no demonstrable procedure exists for verification or judgement either
way.  (And there's way more backstabbing in the social sciences, as a result).

Such situations *do* occur in the natural sciences!  I'm sure that you too can
name some paradigmatic zealots.

In AI research environments, certain problems are "worth" study, certain
things are allowed, certain things are required, certain approaches are
a priori valid.  The rationale for such biases is generally vacuous, or at
least as moot as can be.  And there have been some heated arguments that highlight
the strength of these biases:  the "procedural versus declarative" debate,
and more recently debates about the relevance of logic, for instance.  This
whole debate, here in this newsgroup, requires that there are unsubstantiated
differences of opinion that people are willing to commit themselves to.

The history and philosophy of science, although a social science, is well
worth looking into -- it is, I think, an exception to the "social science is
weak" tendency the article above alluded to.  I doubt that anyone who followed
one of the quality expositions of the development of science through
history would still find such silliness in science so unimaginable.

(Among my favorites, by the way, are the old chemistry theories of caloric and
phlogiston, which once completely dominated research.)

Current conceptions define the problems, the approaches, the value of a piece
of work, and even what will be seen or imagined.  I find it hard to believe
that AI doesn't have a strong case of this disease right now.

glg@sfsup.UUCP (G.Gleason) (05/04/88)

In article <1053@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>The main objection to AI is when it claims to approach our humanity.

>			It cannot.

That's a pretty strong claim to make without backing it up.

I'm not saying that I disagree with you, and I also object to all the
hype which makes this claim for current AI, or anything that is likely
to come out of current research.  I'm also not saying your claim is
wrong, only that it is unjustified; there is more to learn before we
can really say.

There are new ideas in biology that build upon "systems theory," and
probably can be tied in with the physical symbol systems theory (I
hope I got that right) that suggest that information or "linguistic
interaction" is fundamental to living organisms.

In the May/June issue of "The Sciences," I found an article called
"The Life of Meaning." It was in a regular column (The Information Age).
I won't summarize the whole article, but it does present some compelling
examples, and arguments for extending the language of language to talking
about cellular mechanisms.  One is how cyclic AMP acts as an internal
message in E. coli.  When an E. coli lands in an environment without
food, cyclic AMP binds to the DNA, and switches the cell over to a
"motion" program.  Cyclic AMP in this role has all the attributes of
a symbolic (or linguistic) message: the choice of symbol is arbitrary,
and the "meaning" is context dependent.  This becomes even more clear
with the example of human adrenaline response in liver cells.  The
hormone binds to sites on the outside of the cell which causes an
internal message to be generated, which just happens to be cyclic AMP.
The cell responds to the cyclic AMP (not by a DNA based mechanism as
in E. coli) by producing more glucose.  The composition of the message
has nothing to do with the trigger or the response; it is symbolic.
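
The sense in which the token is arbitrary and the meaning context-dependent
can be sketched in a few lines of Python (a cartoon, obviously, not
biochemistry):

    # The same arbitrary message means different things to different
    # receivers; the response lives in the receiver, not in the symbol.
    MESSAGE = "cyclic AMP"            # any token would do
    receivers = {
        "starving E. coli": lambda msg: "switch to motion program",
        "liver cell":       lambda msg: "produce more glucose",
    }
    for cell, respond in receivers.items():
        print(cell, "->", respond(MESSAGE))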

So, how is this relevant to the original discussion?  I don't see any
fundamental difference between exchanging chemical messages or electronic
ones.  Although this does not imply that configurations of electronic and
electromechanical components that we would call "alive" are possible or
that it is possible to design and build one, it doesn't rule it out, and
more importantly it suggests a fundamental similarity between living
organisms and "information processors."  The only difference is how they
arise.  Possibly an important difference, but we have no way to prove this
now.

Gerry Gleason

jeff@aiva.ed.ac.uk (Jeff Dalton) (05/06/88)

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>	"Rigorous sociology/contemporary anthropology"?  Ha ha ha ha
>ha ha ha ha, &c. [...]  By comparison, sociologists produce a great deal
>of nonsense, and indeed the social "sciences" in toto are afflicted by
>conceptual confusion at every level.  Ideologues, special interest groups,
>purveyors of outworn dogma (Marxists, Freudians, et alia) continue to
>plague the social sciences in a way that would be almost unimaginable in
>the sciences, even in a field as slippery, ill-defined, and protean as AI.

Speaking of outworn dogmas, AI seems to be plagued by behaviorists,
or at least people who seem to think that having the right behavior
is all that is of interest: hence the popularity of the Turing Test.

>Also, if you want to say "one dead end after another," you might
>adduce actual dead ends pursued by AI research and contrast them
>with non-dead ends so that the innocent who stumbles across your
>remark won't be utterly misled by your unsupported assertions.   

Does anyone actually think the current techniques are capable of
producing human-level intelligence just by scaling up?  They are all
likely to be dead ends in that sense though they may well be useful
for something else.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

bzs@BU-CS.BU.EDU (Barry Shein) (05/07/88)

Re: the social sciences, AI etc...

The important event that has happened in psychology in the past twenty
or so years is the movement from a descriptive model (eg. poke a brain
with a stimulus like a question or a light to react to, record the
result, try to fit it into a statistical model and relate the repeated
results to other variables) towards a functional model (try to build a
machine which exhibits the same behavior as a mind on the assumption
that this can provide insight into how the mind must work.)

In many ways I think this is critical to psychology becoming a true
science, probably an engineering science as well. It was the movement
from observing it externally to the development of models. Just as
physics went from sitting and watching things move and developing
stories about why they might do that to producing mathematical and
other models which predict and model the behavior observed.

At some point we might be able to make such a paradigm shift in the
other social sciences. I don't know whether or not it is critical to
view something like a society as the sum of its individual minds and,
thus, you must first understand the mind to understand the interaction
of many minds.

For example, one did not need to know atomic physics to write down a
useful theory of mechanics. It has been helpful to grind out the noise
and make more accurate models (eg. molecular models of friction no
doubt make our theories of mechanics more accurate, but they were
hardly necessary to basically understand the principles of a ball
rolling down a hill.)

Although there is little doubt that our social sciences are in their
fetal stages (ie. their methodologies probably have to undergo radical
shifts) I believe that by being able to use computers to build
functioning models to study we may be getting a glimpse of what that
future methodology will have to be.

Simulation is the new mathematics of science. Computers are its
pencil and paper.
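
As a trivial illustration of what I mean by a functioning model, here is a
cartoon in Python of many interacting "minds" (every modelling choice is
invented, of course):

    import random
    # Agents repeatedly pull their "opinion" halfway toward that of a
    # randomly chosen other agent.
    random.seed(1)
    opinions = [random.random() for _ in range(100)]
    for step in range(10000):
        i = random.randrange(100)
        j = random.randrange(100)
        opinions[i] += 0.5 * (opinions[j] - opinions[i])
    # The spread collapses toward consensus -- behavior you study by
    # running the model, not just by telling stories about it.
    print(max(opinions) - min(opinions))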

	-Barry Shein, Boston University

doug@isishq.UUCP (Doug Thompson) (06/01/88)

 
 BS>From: bzs@BU-CS.BU.EDU (Barry Shein) 
 BS> 
 BS>Re: the social sciences, AI etc... 
 BS> 
 BS>The important event that has happened in psychology in the past twenty 
 BS>or so years is the movement from a descriptive model (eg. poke a brain 
 BS>with a stimulus like a question or a light to react to, record the 
 BS>result, try to fit it into a statistical model and relate the repeated 
 BS>results to other variables) towards a functional model (try to build a 
 BS>machine which exhibits the same behavior as a mind on the assumption 
 BS>that this can provide insight into how the mind must work.) 
 BS> 
 BS>In many ways I think this is critical to psychology becoming a true 
 BS>science, probably an engineering science as well. It was the movement 
 BS>from observing it externally to the development of models. Just as 
 BS>physics went from sitting and watching things move and developing 
 BS>stories about why they might do that to producing mathematical and 
 BS>other models which predict and model the behavior observed. 
 BS> 
 
I wonder. I actually wonder if the human psyche (the subject of 
psychology) can actually be dealt with "scientifically" or 
mathematically at all. First, science and mathematics presume 
repeatability and predictability. The ancient idea of human free will 
appears to be at odds with both. Humans appear to be unpredictable. 
Second, think about how you might scientifically or mathematically 
analyse why John Doe is a Republican and Jane Doe is a Democrat. 
 
True, if you do a statistical analysis you can predict within a margin 
of error that so and so will be one or the other, but then try to take 
this into an AI model, and ask a machine to decide which is "best" or 
which is "right", the Democratic Party or the Republican Party. Try to 
replicate the human decision-making process at the ballot-box. 
 
AI wants to build machines that can perform tasks or make decisions as 
well as humans. I think, though, that human reason and decision making is 
not mechanical, it is a-mathematical, a-scientific and a-rational. It is 
hinged to something else, human values (variable), passion (wholly 
subjective), emotions (volatile) and sympathies (unpredictable). 
 
Can you give a machine values, passion, emotions, and sympathies?? 
Maybe, but whose values, whose passions? Yours? Mine? Adolf Hitler's? 
All are "human". 
 
There is something else in the playing field of human sympathies which 
science has not even begun to get a handle on, I think. Machine 
intelligence is still nothing more than a human artifact, something 
created and not intrinsically creative. My hunch is that human thought 
is really dependent on dimensions of the universe which science (as we 
currently understand it) is not yet capable of fathoming. 
 
How can you apply math or science to such things as Faith, Spiritual 
sensibility, religious experience, love or hatred? 
 
I think that science cannot begin to explain the forces which move a 
person to believe or have faith. We can do some statistics on some of 
them, but I think we shall never be able to build a computer like Martin 
Luther, or Jesus Christ, or Moses.  
 
Science can do very well with the natural world, but I suspect there is 
a part of the human being which is strongly connected to a super-natural 
reality which science has yet to get a grip on. 
 
 
------------------------------------------------------------------------ 
Fido      1:221/162 -- 1:221/0                         280 Phillip St.,   
UUCP:     !watmath!isishq!doug                         Unit B-3-11 
                                                       Waterloo, Ontario 
Bitnet:   fido@water                                   Canada  N2L 3X1 
Internet: doug@isishq.math.waterloo.edu                (519) 746-5022 
------------------------------------------------------------------------ 
 

tws@beach.cis.ufl.edu (Thomas Sarver) (06/11/88)

In article <48.22A3B84F@isishq.UUCP> doug@isishq.UUCP (Doug Thompson) writes:
|
|I wonder. I actually wonder if the human psyche (the subject of 
|psychology) can actually be dealt with "scientifically" or 
|mathematically at all. First, science and mathematics presume 
|repeatability and predictability. The ancient idea of human free will 
|appears to be at odds with both. Humans appear to be unpredictable. 
|
| [...]
| 
|Can you give a machine values, passion, emotions, and sympathies?? 
|Maybe, but whose values, whose passions? Yours? Mine? Adolf Hitler's? 
|All are "human". 
| 
| [...]
| 
|How can you apply math or science to such things as Faith, Spiritual 
|sensibility, religious experience, love or hatred? 
| 
|I think that science cannot begin to explain the forces which move a 
|person to believe or have faith. We can do some statistics on some of 
|them, but I think we shall never be able to build a computer like Martin 
|Luther, or Jesus Christ, or Moses.  
| 
|Science can do very well with the natural world, but I suspect there is 
|a part of the human being which is strongly connected to a super-natural 
|reality which science has yet to get a grip on. 
| 
| 
|Bitnet:   fido@water                                   Canada  N2L 3X1 
|Internet: doug@isishq.math.waterloo.edu                (519) 746-5022 

Whoa!  I can't help noticing that this is wishful thinking.  Anyone who's
studied History of Philosophy knows that Man's role in the universe is
being diminished as technology advances.  With each advance in technology,
Man has found something that makes him less "special."  A great example
was when the Earth was found to revolve around the Sun.  Suddenly the Earth is part
of a huge Universe that doesn't completely depend on the Earth, home of Man.

As machines can do more things that Man can do, Man has to try to retain that
sense of "special"ness by saying things like "A machine can never do X" or
"A machine will never have X property."  Ergo, Man is constantly rebelling
against the idea that He can be replaced by a machine.

I personally believe that we won't be able to truly create a machine replicate
of ourselves simply because no entity can artificially create another entity of
equal or greater complexity than itself.

I know, this could be another set of wishful thinking.  But you and I won't be
around to see whether we truly can.  Maybe we can create a man-machine hybrid
that contains the best of both man and machine.  There are also ethical
questions:  what to do about an intelligent machine with free will?

Basically, it starts getting really sticky when we stop talking about
_simulating_ human activity and start talking about _replicating_ it.  I have
some ethical problems of my own that remain unresolved in regard to whether
it is moral to do research towards artificial intelligence at all.  Are we
building the slave owners of future generations?

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
But hey, it's the best country in the world!
Thomas W. Sarver

"The complexity of a system is proportional to the factorial of its atoms.  One
can only hope to minimize the complexity of the micro-system in which one
finds oneself."
	-TWS

Addendum: "... or migrate to a less complex micro-system."

maddoxt@novavax.UUCP (Thomas Maddox) (06/12/88)

In article <15987@uflorida.cis.ufl.EDU> tws@beach.cis.ufl.edu (Thomas Sarver) writes:
>I personally believe that we won't be able to truly create a machine replicate
>of ourselves simply because no entity can artificially create another entity of
>equal or greater complexity than itself.

	Umm, tell it to your genes.  

michael@stb.UUCP (Michael) (06/19/88)

In article <550@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>I personally believe that we won't be able to truly create a machine replicate
>of ourselves simply because no entity can artificially create another entity of
>equal or greater complexity than itself.

Wrong. Perfect self-replicators are possible, so equal complexity isn't a
problem. As for greater complexity, I know it's possible if you are given a
blueprint, and I suspect it's possible even without one.
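
The classic demonstration in software is a quine, a program whose output is
exactly its own source. A minimal one in Python:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Run it and the output is the program, character for character: equal
complexity, achieved mechanically.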

			Michael
: --- 
: Michael Gersten			 uunet.uu.net!denwa!stb!michael
:				 ihnp4!hermix!ucla-an!denwa!stb!michael
:				sdcsvax!crash!gryphon!denwa!stb!michael
: What would have happened if we had lost World War 2? Well, the west coast
: would be owned by Japan, we would all be driving foreign cars, hmm...

ken@cs.rochester.edu (Ken Yap) (06/19/88)

|I personally believe that we won't be able to truly create a machine replicate
|of ourselves simply because no entity can artificially create another entity of
|equal or greater complexity than itself.

But even if we accept this claim, if you consider society as a giant
organism, then it is more complex than any of its members, so a joint
effort might work, right?  Isn't that what the business of scientific
co-operation is trying to do anyway?

	Ken

PS: Yes I know the Americans are trying to keep secrets from the
Japanese who are trying to keep secrets from the Europeans who are
trying ...  But still a fair amount of information and tools flow.