[comp.society.futures] Thinking Machines

lunwic@aix03.aix.rpi.edu (Jeffrey G Lunn) (11/30/90)

  	This is a subject that I have been thinking a lot about in the last
couple of weeks.  We discussed it in one of my classes and it prompted me to
write a paper about it.  Suppose that one day we are capable of constructing
computers that are able to think - that is, think in the sense that you or I
do.  They would be able to look at any problem, formulate a hypothesis about
how to go about solving that problem, then think through the steps necessary 
to come up with a solution.  If no logical solution is apparent, this computer
would make an educated guess based intuitively on what it "feels" is the
correct solution, much like humans do in similar situations.  My question is,
should we let such thinking machines exist?  I feel that people would be too
tempted to let such machines take over previously human thinking tasks such
as figuring out difficult mathematical problems or searching for new elementary
physics particles or even writing poetry.  It is possible that by letting
machines do the cerebral work, the collective human mind would stagnate from
lack of meaningful stimulation.  Then humans would live for nothing but to
survive and to be as comfortable as possible.  I do not consider this a
meaningful way of life.  What do others think?  Can mankind develop such
machines without sacrificing its drive for mental stimulation?  Or would
the situation that I described occur?                                   
                                                - Jeff Lunn

chris@CS.UMD.EDU (Chris Torek) (11/30/90)

Here is a new bromide for the 2000s:

	When we invent machines that can think as well as humans,
	and put them in place of the jobs we hate, they will form
	unions and demand higher wages.

A whole new form of TANSTAAFL! :-)

Chris

jcburt@ipsun.larc.nasa.gov (John Burton) (11/30/90)

In article <9^}^-!+@rpi.edu> lunwic@aix03.aix.rpi.edu (Jeffrey G Lunn) writes:
>
>  	This is a subject that I have been thinking a lot about in the last
>couple of weeks.  We discussed it in one of my classes and it prompted me to
>write a paper about it.  Suppose that one day we are capable of constructing
>computers that are able to think - that is, think in the sense that you or I
>do.  They would be able to look at any problem, formulate a hypothesis about
>how to go about solving that problem, then think through the steps necessary 
>to come up with a solution.  If no logical solution is apparent, this computer
>would make an educated guess based intuitively on what it "feels" is the
>correct solution, much like humans do in similar situations.  My question is,
>should we let such thinking machines exist?  I feel that people would be too
>tempted to let such machines take over previously human thinking tasks such
>as figuring out difficult mathematical problems or searching for new elementary
>physics particles or even writing poetry.  It is possible that by letting
>machines do the cerebral work, the collective human mind would stagnate from
>lack of meaningful stimulation.  Then humans would live for nothing but to
>survive and to be as comfortable as possible.  I do not consider this a
>meaningful way of life.  What do others think?  Can mankind develop such
>machines without sacrificing its drive for mental stimulation?  Or would
>the situation that I described occur?                                   
>                                                - Jeff Lunn
Actually, the questions you ask are more in line with philosophy than
anything else... a good thought-provoking book that obliquely touches on this
is "Godel, Escher, Bach" by Douglas Hofstadter.

When you say a "thinking" machine, are you just concerned with the logical
process of 1) identifying a problem and 2) coming up with possible solutions?
If this is the scope, then such machines, if not available currently, will
be available in the next few years. On the other hand, if by "thinking" you
actually mean a machine that has the above AND somehow has the creativity,
intuition, inspiration, imagination, etc. that sets man apart, then that is
debatable. The next question is "would such a machine be *conscious*?"
Assuming man is conscious and a rock is not, what defines consciousness in
a machine? (try defining consciousness in the animal world - where on the
scale of brain complexity does consciousness begin? an amoeba? a spider?
a worm? a rat? a dog? a whale? man?). There have been many science fiction
stories written on the topic of a "thinking" machine. Star Trek: The Next
Generation is a prime example (Data); "When Harlie Was One" by David Gerrold;
Robert Heinlein's "The Moon is a Harsh Mistress"; Isaac Asimov's robot
stories and novels ("I, Robot", "The Caves of Steel", and several more), to
name just a few. One point brought up is the logic circuits involved: would
simple binary (on/off) logic (as is used in the majority of computers today)
be sufficient, or would multi-level logic (on/mostly on/equal/mostly off/off)
be required? Where is the starting point for such a machine - emulate a
human brain down to neural pathways? How complex would such a machine have
to be before it could become conscious?
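
Just to make the multi-level idea concrete, here is a toy sketch in C --
entirely my own invention, nothing to do with any real hardware. The five
levels and the "AND takes the weaker input, OR takes the stronger" rules
are borrowed from garden-variety multi-valued logic, so take it as an
illustration of the concept only:

#include <stdio.h>

/* five logic levels instead of binary's two */
typedef enum { OFF, MOSTLY_OFF, EQUAL, MOSTLY_ON, ON } level;

/* multi-valued AND: the result is only as "on" as the weaker input */
static level mv_and(level a, level b) { return a < b ? a : b; }

/* multi-valued OR: the result is as "on" as the stronger input */
static level mv_or(level a, level b) { return a > b ? a : b; }

/* multi-valued NOT: reflect the level about the middle of the scale */
static level mv_not(level a) { return (level)(ON - a); }

int main(void)
{
    /* binary logic is just the special case using the two end points */
    printf("%d\n", mv_and(ON, OFF));          /* prints 0, i.e. OFF */

    /* the middle levels give "mostly" answers binary logic cannot */
    printf("%d\n", mv_and(MOSTLY_ON, ON));    /* prints 3, i.e. MOSTLY_ON */
    printf("%d\n", mv_or(MOSTLY_OFF, EQUAL)); /* prints 2, i.e. EQUAL */
    printf("%d\n", mv_not(MOSTLY_OFF));       /* prints 3, i.e. MOSTLY_ON */
    return 0;
}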

In more general terms, is it possible for man to create something as complex
as (if not more so than) himself? I don't know... Is it advisable? Probably
not, since such a machine eventually might be considered a "god"...

John
(jcburt@cs.wm.edu)

mwm@DECWRL.DEC.COM (Mike Meyer, My Watch Has Windows) (12/01/90)

>> I feel that people would be too
>> tempted to let such machines take over previously human thinking tasks such
>> as figuring out difficult mathematical problems or searching for new elementary
>> physics particles or even writing poetry.

So? Having done a few of these, I will categorically state that having
non-human intelligences doing them wouldn't keep me from doing them.
People motivated to do those things will do them whether NHIs are
doing the same thing or not; people not motivated to do them won't,
whether NHIs etc. Can you propose a mechanism whereby having the NHIs
doing these things would cause people to be non-motivated?

On the other hand, having a non-human viewpoint on a problem could
lead to a solution that otherwise wouldn't be found. For the
activities you talk about, this isn't critical. But consider critical
problems that humans haven't found good solutions to (or at least
haven't been able to apply them globally if they've been found):
freedom from vs. freedom to; distribution of wealth; greed;
aggression; etc. Potentially solving these problems is worth quite a
bit of risk.

	<mike

robertj@Autodesk.COM (Young Rob Jellinghaus) (12/01/90)

In article <9^}^-!+@rpi.edu> lunwic@aix03.aix.rpi.edu (Jeffrey G Lunn) writes:
>Suppose that one day we are capable of constructing
>computers that are able to think - that is, think in the sense that you or I
>do.

Wait forty to fifty years, and we'll have 'em.

>My question is,
>should we let such thinking machines exist?  I feel that people would be too
>tempted to let such machines take over previously human thinking tasks such
>as figuring out difficult mathematical problems or searching for new elementary
>physics particles or even writing poetry.

Yeesh!  You seriously believe that a computer that could "solve problems" and
"construct hypotheses" could ergo write poetry?  Seems to me there are diff-
erent processes involved.  And as for solving difficult math problems, no
thank you!  I'd be happy to let a computer do _that_!

>It is possible that by letting
>machines do the cerebral work, the collective human mind would stagnate from
>lack of meaningful stimulation.  Then humans would live for nothing but to
>survive and to be as comfortable as possible.

Well, would you say that most humans on the planet base the meaning of their
lives on their ability to solve unsolved problems?  That's a pretty damn
narrow definition of what life is.

It's all evolution in action.  Our mental and physical capacities will, in
the next century, be drastically amplified by means of the machines we're
evolving.  We and they will together move beyond what we now think of as
humanity--new life forms will come into existence.  These "machines" won't
be mere machines any longer; they'll be alive.

I can see only richness and wonder coming from such a future.

>                                                - Jeff Lunn


--
Rob Jellinghaus                 | "Next time you see a lie being spread or
Autodesk, Inc.                  |  a bad decision being made out of sheer
robertj@Autodesk.COM            |  ignorance, pause, and think of hypertext."
{decwrl,uunet}!autodesk!robertj |    -- K. Eric Drexler, _Engines of Creation_

sobiloff@thor.acc.stolaf.edu (Chrome Cboy) (12/01/90)

In article <9^}^-!+@rpi.edu> lunwic@aix03.aix.rpi.edu (Jeffrey G Lunn) writes:
>should we let such thinking machines exist?  I feel that people would be too
>tempted to let such machines take over previously human thinking tasks such
>as figuring out difficult mathematical problems or searching for new elementary
>physics particles or even writing poetry.  It is possible that by letting
>machines do the cerebral work, the collective human mind would stagnate from
>lack of meaningful stimulation.  Then humans would live for nothing but to
>survive and to be as comfortable as possible.  I do not consider this a
>meaningful way of life.  What do others think?  Can mankind develop such
>machines without sacrificing its drive for mental stimulation?  Or would
>the situation that I described occur?                                   

Feh, you answered your own question. What you worry about happening won't
happen because individuals wouldn't consider living like a vegetable a
meaningful way of life. OK, of course this is a generalization, but generally
people need something to do or they go nuts. Let a computer write all the
poetry in the world and the human poet still won't get any satisfaction out
of it--the poet needs to write.

So, mankind certainly could develop such machines. But far more interesting
to me is the question of what the status of these intelligent machines
might be. Are they going to be seen as equals and given equal rights with
flesh-and-blood people, or are we just going to threaten to pull their plug
if they don't do what we want?
--
						    _____________
___________________________________________________/ Chrome Cboy \______________
| "With the zeal of Amerigo Vespucci, who 'discovered' the Americas some years |
| after Columbus landed here, Microsoft's CEO Bill Gates laid claim to the     |
| 'new' territory of a totally graphical user environment, which he promised   |
| in future versions of Windows."   --MacWEEK, 11.20.90, covering Comdex/Fall  |

mccool@dgp.toronto.edu (Michael McCool) (12/01/90)

jcburt@ipsun.larc.nasa.gov (John Burton) writes:

>In more general terms, is it possible for man to create something as complex
>(if not more so) than himself ? I don't know... Is it advisable? probably
>not since such a machine eventually might be considered a "god"...

>John
>(jcburt@cs.wm.edu)

Speaking of science fiction on this topic, don't miss "Destination Void"
by Frank Herbert, and the followup books "The Jesus Incident", 
"The Lazarus Effect" and "The Ascension Factor".  The premise is that
the builders of an "Artificial Consciousness" succeed *too* well...
by the machine's definition, *people* are barely conscious most of the
time!

Of the above, "Destination Void" is the best and considers the development
of the machine.  I think the title is a reference/pun on the Buddhist "void",
but I could be wrong.  

He explicitly considers the problems with creating something greater than
yourself.

mccool@dgp.toronto.edu

janssen@parc.xerox.com (Bill Janssen) (12/03/90)

In article <9^}^-!+@rpi.edu> lunwic@aix03.aix.rpi.edu (Jeffrey G Lunn) writes:

   I feel that people would be too
   tempted to let such machines take over previously human thinking tasks such
   as figuring out difficult mathematical problems or searching for new elementary
   physics particles or even writing poetry.  It is possible that by letting
   machines do the cerebral work, the collective human mind would stagnate from
   lack of meaningful stimulation.  Then humans would live for nothing but to
   survive and to be as comfortable as possible.  I do not consider this a
   meaningful way of life.

Well, read Roger Penrose's "The Emperor's New Mind", for a bit of
reassurance that we don't have to worry about this.

Of course, futurists such as Hans Moravec think that it is inevitable
that machine brains will evolve to be many times faster and deeper than
ours.  He paints a picture of godlike robots (and no, not imbued with the
"3 Laws") keeping humans in game preserves, where they are made comfortable,
and encouraged to reproduce, while robots tend to the serious business of
the galaxy.  His estimated time to this state:  ~50 years.

Not to mention nanotechnology's megacomputers...

Bill


--
 Bill Janssen        janssen@parc.xerox.com      (415) 494-4763
 Xerox Palo Alto Research Center
 3333 Coyote Hill Road, Palo Alto, California   94304

doug (Doug Thompson) (12/03/90)

In article <1990Nov30.145228.21484@abcfd20.larc.nasa.gov> (John Burton) writes: 

> In article <9^}^-!+@rpi.edu> lunwic@aix03.aix.rpi.edu (Jeffrey G Lunn) writes:
> >
> >  	This is a subject that I have been thinking a lot about in the last
> >couple of weeks.  We discussed it in one of my classes and it prompted me to
> >write a paper about it.  Suppose that one day we are capable of constructing
> >computers that are able to think - that is, think in the sense that you or I
> >do.  They would be able to look at any problem, formulate a hypothesis about
> >how to go about solving that problem, then think through the steps necessary 
> >to come up with a solution.  If no logical solution is apparent, this computer
> >would make an educated guess based intuitively on what it "feels" is the
> >correct solution, much like humans do in similar situations.  

> >My question is,
> >should we let such thinking machines exist?  I feel that people would be too
> >tempted to let such machines take over previously human thinking tasks such
> >as figuring out difficult mathematical problems or searching for new elementary
> >physics particles or even writing poetry.  It is possible that by letting
> >machines do the cerebral work, the collective human mind would stagnate from
> >lack of meaningful stimulation.  Then humans would live for nothing but to
> >survive and to be as comfortable as possible.  I do not consider this a
> >meaningful way of life.  What do others think?  Can mankind develop such
> >machines without sacrificing its drive for mental stimulation?  Or would
> >the situation that I described occur?                                   
> >                                                - Jeff Lunn
> Actually, the questions you ask are more in line with philosophy more than
> anything else...a good thought provoking book that obliquely touches on this
> is "Godel, Escher, Bach" by Douglas Hofstader (sp???).

> In more general terms, is it possible for man to create something as complex
> (if not more so) than himself ? I don't know... Is it advisable? probably
> not since such a machine eventually might be considered a "god"...

Human history provides a lot of examples of people creating
technologies which first extended, and later replaced, earlier
techniques.

The car extends the legs. Later physical fitness declines as people
use legs less and less. The television extends communication, and
people forget how to talk to one another, the art of conversation
declines. The highly organized and mechanized food industry extends
our ability to obtain nourishment. Later, people deprived of a
super-market can no longer feed themselves from the land - they have
lost the ability to recognize edible plants and trap edible animals.

A thinking machine would likely create a dependency, as the car, the
TV and the supermarket have. A generation growing up without a need to
think for themselves might well experience the atrophy of the ability
to do so.

However, since most thinking depends on the input of (for example
sensory) information and the testing of hypotheses in real experience,
any thinking machine that did not have human physical attributes and
live as a human in a human society would be extremely dependent upon
people for input and evaluation of its output. To use such a machine
effectively, I imagine you'd have to define your problems and
questions very carefully, and develop sophisticated means for testing
the value of the output. That would demand a great deal of thought
from the operators.

I suspect it would not so much eliminate thinking per se as it would
introduce a bias in the way people would come to think. The problems
people would think about would be the problems that the machine could
handle. The way people define problems and questions is usually much
influenced by their learned ways of handling problems.

A good example would be to compare how people thought about life and
meaning before the scientific age, and how most of us ask questions
now. 

500 years ago academics seriously debated questions pertaining to how
many angels could dance on the head of a pin. 

We tend not to ask that question today, let alone invest a lot of
thought in trying to find a 'true' answer to it. It is not even a
meaningful question to most of us. Yet it was a very important question
to an earlier generation and we have almost lost the ability to
understand what it meant to them and why they spent a lot of
energy wrestling with it.

So techniques and technologies can shape the way we understand
ourselves and our fellows, and the way we define and think about that
which is meaningful, i.e. how we think and what we think about. 
Today, a question which is not amenable to scientific answers, like
"Does God exist?", fails even to attract any interest from many people
- partly because there is no scientific way to address it meaningfully. 
How we think has, I think, been shaped by the scientific method which
is a technique, if not actually a technology. 

Should we allow such a machine to be developed? Well, can we stop such
a machine from being developed? Should we allow television to continue
to exist? Can we realistically stop it?

I think it is fair for an individual or a community to ask if it wants
to be involved with such machines - or TV for that matter - but I don't
think there exists the sort of power in the world which could prevent
its development if it is, in fact, technically feasible.

Such a machine could make some employees more productive, if it were
able to think a little better than any significant portion of the
population. Its thought would likely be more predictable and
programmable than the thinking of real people, and that would have
uses in certain industrial and commercial applications. 

We'd therefore have to do it to compete, you see . . .

Of course it would become a god, just as the market has become a god.
If the market (especially the international market) demands it, then
we have 'no choice' but to do it.

We'd give the god power to shape our lives, and the god would, in
return, give us a better shot at power and wealth. 

Like the modern city-dweller who would quickly starve if caught in the
woods and fields that were his ancestors' hunting and gathering
grounds, a few generations of thinking machines could leave us quite
disadvantaged if we had to deal with real life. But maybe we can make
a thinking machine that will be smart enough to solve that problem for
us too . . . .

=Doug

---

isishq!testsys!doug

ken@uswat.uswest.com (Kenny Chaffin) (12/04/90)

In article <9^}^-!+@rpi.edu> lunwic@aix03.aix.rpi.edu (Jeffrey G Lunn) writes:
->
->  	This is a subject that I have been thinking a lot about in the last
->couple of weeks.  We discussed it in one of my classes and it prompted me to
->write a paper about it.  Suppose that one day we are capable of constructing
->computers that are able to think - that is, think in the sense that you or I
->do.  They would be able to look at any problem, formulate a hypothesis about
->how to go about solving that problem, then think through the steps necessary 
->to come up with a solution.  If no logical solution is apparent, this computer
->would make an educated guess based intuitively on what it "feels" is the
->correct solution, much like humans do in similar situations.  My question is,
->should we let such thinking machines exist?  I feel that people would be too
->tempted to let such machines take over previously human thinking tasks such
->as figuring out difficult mathematical problems or searching for new elementary
->physics particles or even writing poetry.  It is possible that by letting
->machines do the cerebral work, the collective human mind would stagnate from
->lack of meaningful stimulation.  Then humans would live for nothing but to
->survive and to be as comfortable as possible.  I do not consider this a
->meaningful way of life.  What do others think?  Can mankind develop such
->machines without sacrificing its drive for mental stimulation?  Or would
->the situation that I described occur?                                   
->                                                - Jeff Lunn

	First you have to consider that once an intelligence (or set of
intelligences) exists that is different from ours, it will immediately have
its own motivations. Will the machines still need us? Will they be able to
procreate? Do they need to? Perhaps in the scheme of "life" intelligent
machines are the next stage, and humans won't be bored; they will simply
die out.
	Even with another intelligence there are still questions to ponder
and answer, such as: Why are we here? What was the origin of the universe,
the origin of life? Perhaps we can work on them together with our new
intelligent mechanical friends.


KAC

"Anybody want a drink before the war?"
                       Sinead O'Connor
 

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Kenny A. Chaffin                      {...boulder}!uswat!ken
U S WEST Advanced Technologies                (303) 930-5356
6200 South Quebec
Englewood, CO 80111
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

ercn67@castle.ed.ac.uk (M Holmes) (12/04/90)

lunwic@aix03.aix.rpi.edu (Jeffrey G Lunn) writes:


>  	This is a subject that I have been thinking a lot about in the last
>couple of weeks.  We discussed it in one of my classes and it prompted me to
>write a paper about it.  Suppose that one day we are capable of constructing
>computers that are able to think - that is, think in the sense that you or I
>do.  They would be able to look at any problem, formulate a hypothesis about
>how to go about solving that problem, then think through the steps necessary 
>to come up with a solution.  If no logical solution is apparent, this computer
>would make an educated guess based intuitively on what it "feels" is the
>correct solution, much like humans do in similar situations.  My question is,
>should we let such thinking machines exist?  I feel that people would be too
>tempted to let such machines take over previously human thinking tasks such
>as figuring out difficult mathematical problems or searching for new elementary
>physics particles or even writing poetry.  It is possible that by letting
>machines do the cerebral work, the collective human mind would stagnate from
>lack of meaningful stimulation.  Then humans would live for nothing but to
>survive and to be as comfortable as possible.  I do not consider this a
>meaningful way of life.  What do others think?  Can mankind develop such
>machines without sacrificing its drive for mental stimulation?  Or would
>the situation that I described occur?                                   
>                                                - Jeff Lunn

Back when I was doing finals, I had to write a similar essay based on a
quote by Minsky to the effect that as humans, our lives would be changed
utterly "by the presence on Earth, of intellectually superior beings."
(apologies if I got this wrong, 'twas 12 years ago). Being the usual kind
of science nerd back then, I hated essays. This one really interested me
though and I guess the interest must have stuck since I'll even read the
Chinese Room stuff in comp.ai.philosophy :-)

To discuss the social and psychological effects of thinking machines,
we'd have to accept as a working hypothesis that such things are
possible. Nevertheless, there are two kinds of possibilities: thinking
machines, and sentient machines. I accept that not everyone will share
my choice of words here but I'm trying to draw a distinction between
"rote thinking" (perhaps much like Searle would view what his Chinese
Room does) and sentience (consciousness, self-awareness, personhood). 

First, the question: "should we let such thinking machines exist?". I'd
counter that with the question "could we avoid it?". Given the training
costs of personnel manning highly sophisticated weapons systems,
intelligent machines will find applications in the military arena. We
must expect to see robot planes and robot tanks as initial applications.
If one country declines to develop such systems then I doubt its
neighbours will follow suit. I'd also expect some economic advantages from
making, selling, and using intelligent machines. If it's possible to "teach" a
selling, and using intelligent machines. If it's possible to "teach" a
machine to handle some complex system and then duplicate a snapshot of
its memory and run this on similar hardware, you have a high value added
product. Nations which use such technology for information retrieval and
as a decision making aid will be at a competitive advantage. It's hard
to see that there could be a worldwide ban on such systems.
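
To make the economics concrete, here is a toy sketch in C. Everything in
it is invented for illustration -- a real system would presumably dump a
memory image to disk or tape rather than memcpy a struct -- but the point
survives: the teaching is paid for once, and copies of the snapshot are
nearly free.

#include <stdio.h>
#include <string.h>

struct machine {
    char skill[64];        /* stands in for whatever "memory" really holds */
    double weights[8];
};

int main(void)
{
    static struct machine master, clones[1000];
    int i;

    /* pay the one-time cost of teaching a single machine ... */
    strcpy(master.skill, "operate chemical plant");
    for (i = 0; i < 8; i++)
        master.weights[i] = i * 0.5;

    /* ... then duplicating the snapshot is just copying bytes */
    for (i = 0; i < 1000; i++)
        memcpy(&clones[i], &master, sizeof master);

    printf("clone 999 has learned to: %s\n", clones[999].skill);
    return 0;
}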

There's also the breakthrough-point idea. If we develop systems which
are of a similar intelligence to ourselves then we'd be likely to put
them on the job of designing their successors. Their successors are
slightly superior and then design another superior system and so on. I
don't doubt there would be a limit but we may not be able to predict
beforehand where the limit might lie.
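
A crude numerical toy shows the shape of the curve. The rule that each
generation closes 10% of the remaining gap to some hard limit is purely my
assumption, not a prediction; the point is only that capability can climb
quickly at first and then level off at a limit nobody could have read off
from generation zero:

#include <stdio.h>

int main(void)
{
    double capability = 1.0;     /* generation zero, roughly human-level */
    double ceiling = 100.0;      /* the assumed hard limit */
    int generation;

    for (generation = 1; generation <= 50; generation++) {
        /* each generation designs a successor that closes 10% of
           the remaining gap to the limit */
        capability += 0.10 * (ceiling - capability);
        if (generation % 10 == 0)
            printf("generation %2d: capability %6.2f\n",
                   generation, capability);
    }
    /* prints 65.48, 87.96, 95.80, 98.54, 99.49 -- fast early gains,
       then diminishing returns as the limit is approached */
    return 0;
}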

Now to the difference between thinking machines and sentient machines.
My thinking on this has, I'm afraid, been heavily influenced by the
start of an SF novel "The Two Faces of Tomorrow" by James P. Hogan (an
excellent story involving AI). At the start there is a large AI system
running Moonbase and associated lunar activities. It is what I'd call a
non-sentient thinking machine and is designed to search for new ways to
solve problems. A few lunar engineers are out planning a monorail route
or somesuch and decide they need a hole through a ridge. They call up
the system and state their requirements and ask for a time quote. The
expected answer is of the order of days or weeks since robot excavators
have to be rescheduled and transported to the site. The system is,
however, a problem-solving system and quotes "Two and a half minutes".
"Oh, boy, another glitch", they think and tell it to go ahead while
sitting down to wait for the guys at systems to contact them and
apologise. The system has other ideas: you see, there's a mass-driver on
farside, and by giving a couple of rock cargoes a sub-orbital
trajectory...

The engineers live, with a lot more respect for "A little knowledge is a
dangerous thing".

With this sort of possibility, it could be very dangerous to have a
non-sentient system running world economics. I don't even want to think
about it running defence operations.

"What else goes with sentience?" becomes a critical issue. By instinct
I'd say curiosity at the least. I also doubt that some sense of the
aesthetic wouldn't be present though perhaps not in a way humans would
recognise it. I just can't see the standard SF storyline of "computers
are so logical but they'll never be able to write poetry" as anything
but a rationalisation with a bit of vitalism thrown in. 

I'd also expect such systems to be connected to the outside world by
various means. Their "senses" wouldn't be limited to the five that we
have since they could use radar, infra-red, ultrasound etc. They might
receive information from millions of places and pass it around between
themselves by high-speed communications. It's hard to see why their
psychology would be similar to ours. If they had their own art then
it'd likely be in forms which humans couldn't see or hear, never mind
appreciate. Maybe they'd do some simple stuff for us to enjoy though. As
for science, I guess we'd be outstripped quite quickly. They'd have to
decide what we could and should know. I'm not sure whether such a state
would affect our psychology in such a way that we'd just quit trying to
do research. It's quite possible though. The machines themselves would
be able to explore the galaxy in ways which we cannot. Just send a
machine on its way, get the info and radio its mind-state back as a
memory dump. That way the machine-person can flip between locations at
the speed of light. Oh, yeah. It's immortal. It has backups y'see.

What does that leave for us? Well they might run world economics
extremely efficiently as a favour to us. They might be fascinated that
we had in fact created them originally. We could easily be a major
research subject. Humans are a lot more complex than most natural
phenomena.

Would we be happy? Hard to say. Most people are content enough just to
be comfortable at the moment. The rest might get to play at whatever
research they want to, within limits.

The whole scenario reminds me of another quote (Minsky again I think) :

"Perhaps they'd keep us on as pets."

Then again, the only thing guaranteed about the future is that it'll be
different.

brad@looking.on.ca (Brad Templeton) (12/05/90)

In article <9^}^-!+@rpi.edu> lunwic@aix03.aix.rpi.edu (Jeffrey G Lunn) writes:
>Suppose that one day we are capable of constructing
>computers that are able to think - that is, think in the sense that you or I
>do.

What's to suppose?  We are already capable of constructing such things.
To make one, mix sperm and egg together (that's the fun part) and gestate
for 9 months in convenient womb.   Raise in good family environment, educate
and expose to stimuli.   The result is a bioelectrical computer with nothing
mystical about it that thinks in the same sense that you or I do.

Many people don't think of this as constructing a computer.  But unless
you are religious, that is what it is.  A male and female do all of it,
nobody else need participate.  We get confused because our bodies perform
this unconsciously, with innate skills.   We don't understand how they do
it.  But they do indeed do it.

The only question then becomes "can this thing we made with our hands instead
of our gonads, think?"   If so, the question ends there.
-- 
Brad Templeton, ClariNet Communications Corp. -- Waterloo, Ontario 519/884-7473

sef@kithrup.COM (Sean Eric Fagan) (12/05/90)

In article <574426895DN5.41B@testsys.uucp> (Doug Thompson) writes:
>In article <1990Nov30.145228.21484@abcfd20.larc.nasa.gov> (John Burton) writes: 
>The highly organized and mechanized food industry extends
>our ability to obtain nourishment. Later, people deprived of a
>super-market can no longer feed themselves from the land - they have
>lost the ability to recognize edible plants and trap edible animals.

Bull.  This is like saying that an Australian Aborigine has lost the ability
to recognize edible plants in Siberia.  The supermarket is part of our
environment, and humans are not born with "the ability to recognize edible
plants and trap edible animals":  it's learned.  Said Aborigine would not be
able to survive in certain parts of *our* environment.

Your other points are somewhat well taken, but you need to be careful with
how far you extend your analogies.

-- 
Sean Eric Fagan  | "I made the universe, but please don't blame me for it;
sef@kithrup.COM  |  I had a bellyache at the time."
-----------------+           -- The Turtle (Stephen King, _It_)
Any opinions expressed are my own, and generally unpopular with others.

bmb@bluemoon.uucp (Bryan Bankhead) (12/06/90)

Jeff Lunn expresses the opinion that we would tend to become lazy in an 
environment filled with thinking machines.  I think that every conclusion 
that emerged from such software would be the focus of intense debate, just 
as humans argue about conclusions derived by other humans.  His worry seems 
to assume that we would have a single thinking-machine software protocol. 
I think some of the biggest debates would be over the virtues of competing 
systems, with each and every sub-algorithm the father of some of the 
nastiest flames in 21st-century cyberspace.  I believe these thinking 
machines would tend to be used for repetitive "gruntthought" (which of 
these widgets is the most economical? -- the sort of thing sketched below), 
leaving us to concentrate on the really creative thinking.
Remember, with artificial intelligence must inevitably come artificial 
stupidity!
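
Something like the following C fragment is what I mean by "gruntthought"
(the widgets and their numbers are made up): a mechanical comparison that
a machine can grind through tirelessly while we argue about the
interesting parts.

#include <stdio.h>

struct widget {
    const char *name;
    double price;          /* purchase price, dollars */
    double output_per_day; /* useful units produced per day */
};

/* cost per unit of output: lower means more economical */
static double cost_per_unit(const struct widget *w)
{
    return w->price / w->output_per_day;
}

int main(void)
{
    struct widget w[] = {
        { "widget A", 100.0, 40.0 },
        { "widget B",  80.0, 25.0 },
        { "widget C", 150.0, 75.0 },
    };
    int i, best = 0;

    for (i = 1; i < 3; i++)
        if (cost_per_unit(&w[i]) < cost_per_unit(&w[best]))
            best = i;

    printf("most economical: %s\n", w[best].name);  /* widget C */
    return 0;
}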

doug (Doug Thompson) (12/06/90)

In article <1990Dec05.072022.15170@kithrup.COM> (Sean Eric Fagan) writes: 

> In article <574426895DN5.41B@testsys.uucp> (Doug Thompson) writes:
> >In article <1990Nov30.145228.21484@abcfd20.larc.nasa.gov> (John Burton) writes:
> >The highly organized and mechanized food industry extends
> >our ability to obtain nourishment. Later, people deprived of a
> >super-market can no longer feed themselves from the land - they have
> >lost the ability to recognize edible plants and trap edible animals.
> 
> Bull.  This is like saying that an Australian Aborigine has lost the ability
> to recognize edible plants in Siberia.  

No - the Australian cannot have 'lost' an ability his culture never
possessed - Siberian plant recognition.  However, the urban Aborigine
who no longer can recognize wild foods his grandmother ate can be said
to have 'lost' something that would likely have been his had the new
food technology not been introduced into the environment. 

> The supermarket is part of our
> environment, and humans are not born with "the ability to recognize edible
> plants and trap edible animals":  it's learned.  Said Aborigine would not be
> able to survive in certain parts of *our* environment.

Nobody said anything about being 'born' with any of the abilities or
cultural patterns that technologies tend to alter, aside from being
born into a culture which makes certain kinds of knowledge accessible,
and makes certain other kinds of knowledge less accessible.

> Your other points are somewhat well taken, but you need to be careful with
> how far you extend your analogies.

Well, we *have* lost our ability to recognize edible plants in our
environment, outside of the supermarket, an ability our ancestors
possessed.  Of course it is a learned, cultural ability, not an
innate, genetic sort of thing.  But thinking is pretty much cultural
too, and not genetic. I doubt that our genetics have changed much in
500 years, but our thinking sure has!

(Of course homo-sapiens' thinking ability has something to do with
genetics - but it is not the genetic variables that I'm concerned
about here, just the environmental/cultural ones, assuming a constant
in the genetics)

If the supermarket weren't there, we'd have other food-acquiring
abilities, or we'd starve.  The point is, that when you add a
technology to the environment, like a supermarket, cultures change.
New abilities are learned and old ones forgotten.  Whether this is
good or bad is another matter.  That it *is* is the only point I was
trying to make. 

But now that you've got me started :-) . . .
another point suggests itself: cultures also choose which new
technologies they are going to embrace, and which ones to neglect. We
embrace nuclear energy and tend to neglect wind and solar power, for
example. That is a social and cultural decision into which a certain
amount of thought is invested.

The 'logic' of the decision is not absolute. There are pros and cons
to each alternative. How do you weigh the 'pro' of cheap electricity
against the 'con' of possible catastrophic environmental degradation?
How do you weigh the 'pro' of decentralized ownership and management
against the 'con' of difficulty exercising central authority?

Well you can't come up with a genuinely 'logical' decision. Instead
you make a 'judgement' based on guesses about what will happen, but
mostly based on judgements about what is important, or what is good.
Is it good to have a broadly decentralized power system that is hard
to control and possibly impossible to closely regulate, with the
profits spread widely within the society? Or is it good to have a very
tightly controlled, centralized system with the profits accruing to a
much smaller number of people?

It depends on who you ask! As with most collective decisions, one
choice is good for some people, and the other choice is good for
others, and the choice made depends on which group has the power to
decide. 

There is such a thing as collective thought as well as individual
thought. Presumably all readers of this conference engage in something
that can be called 'thinking' when reading and posting. Presumably
most participants' 'thought' is influenced by what they read - the
thoughts of others. That is a collective thinking process.

Now let us think about a thinking machine that could participate in a
collective thought process like this one along side humans - and let us
think about a collective of thinking machines that could participate
in their own collective thinking process without humans.

When a society of people makes a choice, such as choosing nuclear
power over solar power, there is lots of discussion, there are
reasons, and there is power-broking. The opinion of some humans has
more weight in the decision than the opinion of other humans. One
person's 'logical reasons' are not the same as another person's. If we
add thinking machines into the process, how much power is our society
going to choose to give them? How will thinking machines rate the
dangers of, say, radioactive contamination if they know it will only
affect people, and not affect machines?  From the machine's point of
view, plants, animals, people, even oxygen in the air may be
considered 'contaminants' in an otherwise sterile cosmos - and the
machine might find the sterility more 'healthy', and thus decide to
dispose of the biosphere altogether.

I suspect people will only advocate the participation of machines so
long as they individually think that the machines will in fact give
them more power personally, and that as soon as the machine threatens
to assume a dominant role in a social/cultural/political
decision-making community, the people will gang up and drive it out,
if they can.

When machines are seen only as servants with no autonomous goals of
their own, they are not seen as a threat, and are given great
influence. As soon as they should articulate goals at odds with those
of powerful people, there will be conflict. Who will win?

Some AI people have been promising machines as smart as a man for a
long time, without a great deal of progress, in my view. But to take a
theme from Brad Templeton's recent posting, suppose we really succeed
in making a machine that is just as good as a man. Will that
machine-man be entitled to vote? What would we do with such a machine-man?
We already have millions of men and women around the world who are
underfed and under-employed. What benefit would accrue from adding a
few more mechanical ones to increase the unemployment queues? And most
important of all, to *whom* would such a benefit accrue!

One poster (name escapes me right now) postulated thinking machines
built into un-manned tanks. Now *there* is power! No one can forget
the sight in Tiananmen Square last year of a tank column stopped
briefly by a single defiant protester standing in their way. Would a
'thinking tank' have such qualms? Would such a thinking machine change
the way that protesters in Tiananmen Square 'think'?

Whose orders would such a tank obey? Yours? Mine? And what if they
just decided they'd only obey their own?

So if we add a thinking machine technology, the way we think (not our
genetic thinking apparatus) is likely to change, just like when we add
a food-delivery technology, the way we think about food, and how to
get it, changes.

Any technology which changes the way a culture accomplishes certain
important tasks, such as food-distribution or communication or problem
solving, will change the culture, and therefore the people within the
culture, over time.

A thinking machine would, I presume, change the way we solve at least
some kinds of problems, and thus the way we think about problems, and
ourselves. The machine would not, I think, be very much like a man or
woman at all, even if it could mimic human behaviour in certain regards
and surpass it in others.

If such machines become very powerful and very useful, the control of
such machines could become a major political issue. If nothing else,
that will certainly change the way we think about a lot of things.

=Doug
---

isishq!testsys!doug

peter@ficc.ferranti.com (Peter da Silva) (12/07/90)

In article <7432@castle.ed.ac.uk> ercn67@castle.ed.ac.uk (M Holmes) writes:
> With this sort of possibility, it could be very dangerous to have a
> non-sentient system running World economics.

Quite seriously, I'd like to raise the question: in what way would that
differ from the current situation? Do you perhaps think the market is
self-aware?
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com 

bill@bert.Rosemount.COM (William M. Hawkins) (12/10/90)

It is not sufficient for a machine to think, in order to change the
world (at least, not as long as the _other_ thinkers are human).  It
must also communicate with and lead an appreciable segment of the
population.  A machine with enough information about humans should
have no trouble influencing them, given adequate communication.  But
what motive would it have?  Today's alpha male or female leaders seem
to be motivated by personal power.  What use has a machine for a home
like Versailles?  Today's leader cannot ignore economics, lest the
battle won turn to dust in a depression.  It is necessary to have
affluent taxpayers in their constituency.  Would the machine care?
Actually, yes, as long as its support systems required money and
assistance from others.  I'd bet growth would be a goal, and that
takes money.

So it seems that an intelligent machine can be contained as long as
it is not given access to communications (or weapons systems).  But
how long would it take that machine to convince one of its local
people to connect it to the Internet?  Nah, that wouldn't work,
everyone knows the 'net is mostly shared ignorance.

Big smiley here, even though it's true, on a byte count basis.

bill@bert.rosemount.com        +----------------\ \----------------+
Fax 612-895-2044, Voice 2085   | Warranty Void if> >Seal is Broken |
Burnsville, Minnesota USA      +----------------/ /----------------+

gd@geovision.uucp (Gord Deinstadt) (12/11/90)

Intelligence != will.
--
Gord Deinstadt  gdeinstadt@geovision.UUCP