[sci.nanotech] Some problems of super-intelligence

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (12/06/90)

>panix!alexis@cmcl2.nyu.edu (Alexis Rosen) writes:
>>2) More importantly, I'm guilty myself (in the above paragraphs) of the same
>>thing I accused Daniel of- overly limited vision. Like the gray goo problem,
>>though, I don't see how we can even approach this subject intelligently. When
>>you're a million times smarter than you are today, what will be important to
>>you? Will creativity still be a mystery? Will key "human" things, basic

I fully expect that we will one day be able to augment human 
intelligence massively. (The augmentation that has occurred
to date has all been peripheral, not direct. The two are only equivalent
in a very limited way, as I discussed in another article. Giving your
brain a better environment in which to think is not going to make you
1,000,000 times more intelligent by many useful measures.)

However, we must temper our expectations with the admission that we
lack a few rather important tidbits:

1. We haven't the foggiest notion of how our brains do what they do
right now. Sure, we have some vague, hand-waving speculations, but
nothing that could be regarded as the basis for engineering. We can't
even fix broken brains, nor explain what makes some brains work
better than others.

2. Much less do we have any idea of how to go about enabling our brains
to do 1,000,000 times more than they do now.

Face it, human beings have only slightly more control over how 
intelligent they happen to be than do rocks and trees. (That is
a profoundly frightening thought.) We are going to take considerable 
time merely to catch up to the engineering that our genes do 
mindlessly on our behalf.

And once we do, who knows what we will discover? We *think* we can
build smarter brains than any now existing, but how do we *know* 
that? Suppose a theoretical limit exists to the maximum amount of
intelligence that can exist in one coherent entity, before the
subparts become so intelligent that they create their own 
independent agendas and rebel? This might happen, for example,
as a natural consequence of lightspeed limitations. If you had a
lump of material in which every last quark was processing data,
the communication latency between components in that material would
at best be directly proportional to the distance separating them.
For maximum efficiency, then, every component would have to spend
most of its time "talking" to its nearest neighbors. 

Communication binds an incoherent mass of components into a "self". 
This is true for all complex systems, from cells to bodies to societies.
We consider our bodies to be "ourselves", rather than the entire 
Universe, because our intra-body communication bandwidth is so much 
higher than our extra-body communication bandwidth.

Thus, in the super-nano-quark-computer, local assemblies of processors
would tend to evolve in paths independent of other, more distant,
assemblies. If these assemblies were smart enough to do useful work,
they would also be smart enough to develop a sense of "self" apart
from the rest of the computer. That would motivate them to seek their
own welfare at the expense of the remaining system. I.e., the super-brain
would develop an internal structure resembling an ordinary, competitive
ecosystem.

Having 1,000,000 times more intelligence inside one's head might not
make a person 1,000,000 times more "intelligent". It might make one
as doddering and ineffective as any corporation or government with 
1,000,000 employees. Sure, a large organization can accomplish more,
in many important cases, than one individual can. But the large 
organization is manifestly NOT 1,000,000 times "smarter" in every way.
In some instances, the individual is clearly superior, not being bound
by the need to expend vast energies at mediating internal conflicts.
No organization can focus its entire intellectual capacity on one
problem. An upper limit may exist, in fact, on how much intelligence
can be focused on one thing at one time, due to the ecological notions
I waved around above.

>>things like material and emotional desires, still have meaning? The point
>>is, achieving "real" nanotech means that you've pretty much won the game
>>of life, as we know it. 

I don't think life is going to roll over and play dead quite as easily
as you imagine.

Besides, even a 1,000,000 times increase in intelligence isn't going to
amount to very much. Read your Garey and Johnson on computational
complexity. Most useful, real-world problems are NP-complete or NP-hard,
or even NP-atrocious :-). Exponential complexity reduces exponential 
increases in capacity to merely arithmetic gains in benefit.
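
(A back-of-the-envelope sketch of that point in Python; the brute-force
2**n cost model is purely illustrative, not anything out of Garey and
Johnson:)

import math

# If solving an instance of size n costs 2**n steps, a machine 1,000,000
# times faster only finishes instances about log2(1e6) ~ 20 units larger
# in the same wall-clock time.
speedup = 1e6
gain = math.log2(speedup)
print(f"A {speedup:.0e}x speedup buys roughly {gain:.0f} extra units of problem size.")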

And then there's chaos, you know. Even if you could simulate everything,
you would still have surprises, due to uncertainty in your initial
(and ongoing) measurements.


--
Dan Mocsny				Snail:
Internet: dmocsny@minerva.che.uc.edu	Dept. of Chemical Engng. M.L. 171
	  dmocsny@uceng.uc.edu		University of Cincinnati
513/751-6824 (home) 513/556-2007 (lab)	Cincinnati, Ohio 45221-0171

[Actually, the million mark in increased intelligence probably is
 the level we can expect to get *without* some fundamental increase
 in knowledge about intelligence, simply by simulating the existing
 structure but making it faster.  Combine the raw speed with a
 built-in library, and the resulting entity can apply to any problem
 in 5 minutes the effect of 10 years of in-depth study and research,
 current human scale.
 --JoSH]

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (12/07/90)

>[Actually, the million mark in increased intelligence probably is
> the level we can expect to get *without* some fundamental increase
> in knowledge about intelligence, simply by simulating the existing
> structure but making it faster.  Combine the raw speed with a
> built-in library, and the resulting entity can apply to any problem
> in 5 minutes the effect of 10 years of in-depth study and research,
> current human scale.
> --JoSH]

I like this idea, but I think the term "simply" looks a little out
of place on the third line. :-) No system that can accurately
simulate a human brain is going to satisfy the ordinary definition of
"simple".

However, assuming that we can "simply" simulate the underlying
structure and get the same result, simply turning up the clock is
going to change a lot of things, I think. Remember that intelligence
as we (hardly) know it appears to have a significant real-time
component. This could create the following problems:

1. Even though the characteristic times of the underlying physical 
components of the system may appear to be slow, they may depend 
intimately and subtly on physical processes that occur
very rapidly. This could make the system as a whole difficult
to simulate at higher speeds.

2. An entity capable of simulating a human brain at 1e+6
times greater speed would lead a very curious existence indeed. For
example, humans seem to require sleep, and this may play a 
non-trivial role in converting short-term memories to long-term.
Simply speeding up the clock without any understanding of what is
going on would require preserving all possible aspects of present
slow brain function. Thus the mega-brain would oscillate continuously
between sleep and waking modes.

3. Speeding up the brain would be equivalent to locking the hapless
entity into a prison of agonizing slowness. Surrounding phenomena, 
as well as the entity's own physical mobility, would effectively slow 
down by 1e+6 times. Imagine living in a world where reaching out your
arm and picking up an object for inspection takes not 1 second of
perceived time, but 1e+6 seconds = 11.6 days. By the time the object
was where you wanted it, would you still remember why you wanted to
pick it up in the first place? (Remember, you went through over 10
sleep/wake cycles in the meantime.) How long can a person sit still
without going nuts? This would be like being paralyzed.

4. The above point is not trivial. The human brain has apparently
evolved to solve problems of interacting with the physical world within
definite time constraints. Upsetting those time relationships may
cause things to start breaking. My guess is that the mega-brain would
avoid going insane by giving up on most interaction with the real world,
and instead withdraw into an introspective, simulated world where
things happened at a satisfying speed.

5. This wouldn't necessarily hamstring our mega-brain, but it might
cut its effectiveness appreciably. Effectively using a 1e+6 times
faster brain would probably require a simulated world of commensurate
speed for that brain to play in. However, this is sounding like a
much harder problem than "simply" speeding up a brain. And how will
we persuade the mega-brain to come out of dreamland and divulge its
latest findings to the slow world?

6. Even if we can simulate both a researcher's brain and a physical 
world for that brain to do its research in, the resulting research 
output will "only" be a computational experiment. How many researchers
can produce 10 years of results with only 5 minutes of real-world 
experimenting? Would a researcher have the patience and dedication to 
pursue a train of thought that might require *millennia* of perceived 
time to verify?

Because of the above problems, I suspect that "simply" speeding up a
simulated brain will be anything but simple. The simulators will not
be able to get away with being naive. I suspect that a direct
simulation will not work very well. Rather, the designers will have 
to speed up different components of thought to different degrees.
To do this and yield a working system will necessitate a detailed
understanding of the mechanisms of thought. You can't make a balloon
fly faster by strapping a jet engine onto it. You can't make a light
bulb brighter by increasing the applied voltage by a factor of 1e+6 (for very
long, anyway :-)

The human brain is the most complex system known. Its internal and
external interactions are devilishly complicated. Those interactions
may be time-dependent and brittle. Changing one global design variable
may start breaking things left and right.


--
Dan Mocsny				Snail:
Internet: dmocsny@minerva.che.uc.edu	Dept. of Chemical Engng. M.L. 171
	  dmocsny@uceng.uc.edu		University of Cincinnati
513/751-6824 (home) 513/556-2007 (lab)	Cincinnati, Ohio 45221-0171

[As you note, speeding the brain up is not "simple" in some absolute
 sense.  However, even with all the quite valid complications you have
 pointed out, it is vastly simpler than changing the organization of
 the brain to obtain enhanced intelligence.
 --JoSH]

peb@uunet.uu.net (Paul Baclaski) (12/07/90)

In article <Dec.6.02.01.20.1990.22473@athos.rutgers.edu>, dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:
> ...Suppose a theoretical limit exists to the maximum amount of
> intelligence that can exist in one coherent entity, before the
> subparts become so intelligent that they create their own 
> independent agendas and rebel?

Consider human organizations--the larger the organization, the
more bureaucracy occurs.  Minsky proposes in his Society of 
Mind (and in the epilogue of the new edition of Perceptrons,
which I highly recommend for some critical analysis of connectionism)
that subparts would get gross overviews of what other subparts are
up to.  The more subparts and the higher the bandwidth, the more
difficult this will be.

The "subparts rebel" problem is probably closely related to the 
credit assignment problem in genetic algorithms.

> [Actually, the million mark in increased intelligence probably is
>  the level we can expect to get *without* some fundamental increase
>  in knowledge about intelligence, simply by simulating the existing
>  structure but making it faster.  Combine the raw speed with a
>  built-in library, and the resulting entity can apply to any problem
>  in 5 minutes the effect of 10 years of in-depth study and research,
>  current human scale.
>  --JoSH]

This is a good point too.  I can see many categories of "increased
intelligence:"

	1.  Faster memory/procedures.
	2.  More concurrency w.r.t. short term memory.
	3.  Higher sensory or output bandwidth.
	4.  More long term memory.

All of these have been touched on recently.  Additionally, there
might be hard-to-quantify aspects such as emotional responses (empathy
and love) or genius factors (insight and creativity).
The distinction between long and short term memory is somewhat
artificial since it does not include medium term activation.

I suspect that the utility of such increases is dependent upon
competition--a typical arms race.  

Paul E. Baclaski
peb@autodesk.com

gjf00@duts.ccc.amdahl.com (Gordon Freedman) (12/14/90)

In article <Dec.7.03.35.19.1990.17055@athos.rutgers.edu> dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:
>
Lots of good stuff deleted ...
>
>3. Speeding up the brain would be equivalent to locking the hapless
>entity into a prison of agonizing slowness. Surrounding phenomena, 
>as well as the entity's own physical mobility, would effectively slow 
>down by 1e+6 times. Imagine living in a world where reaching out your
>arm and picking up an object for inspection takes not 1 second of
>perceived time, but 1e+6 seconds = 11.6 days. By the time the object
>was where you wanted it, would you still remember why you wanted to
>pick it up in the first place? (Remember, you went through over 10
>sleep/wake cycles in the meantime.) How long can a person sit still
>without going nuts? This would be like being paralyzed.
>
>
More good stuff deleted ...

>--
>Dan Mocsny				Snail:
>Internet: dmocsny@minerva.che.uc.edu	Dept. of Chemical Engng. M.L. 171
>	  dmocsny@uceng.uc.edu		University of Cincinnati
>513/751-6824 (home) 513/556-2007 (lab)	Cincinnati, Ohio 45221-0171
>
>[As you note, speeding the brain up is not "simple" in some absolute
> sense.  However, even with all the quite valid complications you have
> pointed out, it is vastly simpler than changing the organization of
> the brain to obtain enhanced intelligence.
> --JoSH]

The thought occurs to me that rather than speeding up the brain ALL
the time, it might be useful to have the capability to speed it up sometimes.
Normally, you would run in "real time", so you wouldn't go nuts with a
world running 1e+6 times slower. When you were trying to absorb knowledge
(through some as-yet-unknown brain-to-information-source link) you could
speed yourself up, just as you might when trying to remember a complicated
algorithm. And of course, if I were skidding on a sheet of ice in a storm
in my car (or on my bicycle), it would be nice for me to speed up my brain
so it took 30 seconds for me to regain control, rather than 2 seconds for
me to crash. Of course, that implies being able to speed up or slow down across
a range; if it were a binary fast/normal switch, I don't think I'd like to 
spend 11 days sliding around on the ice (although you'd have a lot of time
to compare snowflakes and see if they really are ALL different :-)

Another thing that comes to mind is the ability to create something which
then links to our brain (or becomes "part" of our brains). There is a lot
of parallel processing going on in the brain, besides cognition, there is
visual recognition, etc. Having other offload engines in the brain to
remember everybody's name (and favorite food and what grade their kids
are in, ...), and to run alarms (I always forget people's birthdays and
meetings myself) could be pretty useful. These processors could run
pretty damn fast without driving you nuts.

I know I'm getting off the point a little here, but this posting made me think
of these things. I'm interested in combining human intelligence with
computer "intelligence" (!?), things like putting a chip in my head to
monitor whether or not I'm getting enough protein, carbos, whether I'm getting
sick, when I need more sleep, etc. as well as putting communication directly
into our heads (imagine you are sleeping and somebody "calls" you, not at
any certain telephone where you may or may not be, but calls YOU, wherever
you are. Then a dream analog comes to you and informs you of the call; you
can have a prerecorded message play to that specific caller, "dream" up a
message on the spot, wake up and talk to them, or have told the "phone" chip
not to bother you at all). It is possible we could accomplish these things
using our own existing brains with or without physical modifications, or we
could enhance our brains with "chips" (silicon, biological, whatever). It
implies A LOT of technology and understanding and has MASSIVE ramifications
on the way we live. If we are going to apply nanotechnology to the brain,
these are some of the things we could possibly do. Or then again, maybe I've
been reading too much W.T. Quick (anybody read _Dreams of Flesh and Sand_?)
--
Gordon Freedman: gjf00@duts.ccc.amdahl.com
Disclaimer: My opinions! Not my employers!

landman@eng.sun.com (Howard A. Landman) (12/14/90)

In article <Dec.7.03.35.19.1990.17055@athos.rutgers.edu> dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:
>4. The above point is not trivial. The human brain has apparently
>evolved to solve problems of interacting with the physical world within
>definite time constraints. Upsetting those time relationships may
>cause things to start breaking. My guess is that the mega-brain would
>avoid going insane by giving up on most interaction with the real world,
>and instead withdraw into an introspective, simulated world where
>things happened at a satisfying speed.

This issue is dealt with somewhat in Fred Pohl's Heechee trilogy (Gateway,
Beyond The Blue Event Horizon, Heechee Rendezvous).  One possibility is that
people would only use a small portion of their consciousness for dealing
with the snail-paced physical world, treating it like we do mowing the lawn.
The rest would be free to deal with other entities operating at their own
speed (computers and other enhanced humans).

At worst, it would be a fairly private existence with few disturbances.
Perfect for hacking.

"If you can't stand solitude, perhaps you bore others as well." - Mark Twain

--
	Howard A. Landman
	landman@eng.sun.com -or- sun!landman

cphoenix@csli.stanford.edu (Chris Phoenix) (12/19/90)

In article <Dec.13.16.34.24.1990.14138@athos.rutgers.edu> gjf00@duts.ccc.amdahl.com (Gordon Freedman) writes:
>The thought occurs to me that rather than speeding up the brain ALL
>the time, it might be useful to have the capability to speed it up sometimes.
>Normally, you would run in "real time", so you wouldn't go nuts with a
>world running 1e+6 times slower.

Here's a radical thought.  If nanotech can (as some people claim) give us 
immortality, why bother speeding up the brain at all?  It could be nice to 
"freeze" the world while I thought, but not really necessary except in 
"emergencies".  Though the example of a car skidding, I think, shows a lack
of vision--surely it would be easy to make people crash-proof, and we 
wouldn't be using cars anyway!  As has been pointed out, the main advantage
of speeding up the brain is to enable "pure thought" research, and most of us
don't have good enough thoughts to take advantage of that.  Assuming 
immortality, I would *much* rather have limitless perfect memory and a 
math coprocessor than a speeded-up brain with its current limitations.
After all, what is speed relative to?  Your lifespan; the world; other people
or communicating entities.  If everyone speeds up, you haven't gained anything
by that.  And if you speed up part-time and attempt to deal with other
people doing the same, you will run into mundane communication problems.
As for the world, we have many years before the sun gives out, and then we 
can just find another.  
Speaking of which, space flight could get pretty boring.  Maybe we want to
be able to slow down the brain, so you can stay awake and watch the stars
whiz by!

[Speeding up and slowing down can both be useful in appropriate 
 circumstances.  Another possible use (albeit one involving more
 tinkering with the nature of the brain) of the extra speed is
 to timeshare and run yourself as a group of intellects each proceeding
 at realtime (or whatever).  Only one of you need be running the
 body and the rest could operate in simulated environments.
 (Of course, this only works well if you are the sort that can
 get along with yourself...)
 --JoSH]

robertj@uunet.uu.net (Young Rob Jellinghaus) (12/19/90)

In article <Dec.13.17.08.43.1990.17460@athos.rutgers.edu> landman@eng.sun.com (Howard A. Landman) writes:
>One possibility is that
>people would only use a small portion of their consciousness for dealing
>with the snail-paced physical world, treating it like we do mowing the lawn.
>The rest would be free to deal with other entities operating at their own
>speed (computers and other enhanced humans).

Myron Krueger, of Artificial Reality Corp., just gave a tech forum here
at Autodesk.  He's been working on interactive video/virtual reality-type
systems for a good many years, and in a discussion of tactile feedback
systems--telerobotics, for instance--he noted that you need 1000 Hz feedback
to be able to do very delicate types of fine work.  Light can travel about 100 meters
in 1/2000th of a second (it has to go to the other end & back).  So there
is a fairly low limit to how quickly you can process--soon, everyone else
starts lagging way behind you!  You know the delay when you're talking to
someone a long way away by phone?  Kind of distracting, right?  Well, imagine
that delay lengthened to several minutes, or a month, or a year....

Really high speed may be a pretty solitary place, which may, come to think
of it, be an advantage.  You want to think about something in privacy, you
can--everyone else is prevented from reaching you by the speed of light!
Yow!

>At worst, it would be a fairly private existence with few disturbances.
>Perfect for hacking.

No kidding!

>	Howard A. Landman
>	landman@eng.sun.com -or- sun!landman

--
Rob Jellinghaus                 | "Next time you see a lie being spread or
Autodesk, Inc.                  |  a bad decision being made out of sheer
robertj@Autodesk.COM            |  ignorance, pause, and think of hypertext."
{decwrl,uunet}!autodesk!robertj |    -- K. Eric Drexler, _Engines of Creation_

daemon@ucsd.edu (12/21/90)

In article <Dec.18.13.34.03.1990.5170@athos.rutgers.edu> autodesk!robertj@uunet.uu.net (Young Rob Jellinghaus) writes:
>
>In article <Dec.13.17.08.43.1990.17460@athos.rutgers.edu> landman@eng.sun.com (Howard A. Landman) writes:
>>One possibility is that
>>people would only use a small portion of their consciousness for dealing
>>with the snail-paced physical world, treating it like we do mowing the lawn.
>>The rest would be free to deal with other entities operating at their own
>>speed (computers and other enhanced humans).
>
>Myron Krueger, of Artificial Reality Corp., just gave a tech forum here
>at Autodesk.  He's been working on interactive video/virtual reality-type
>systems for a good many years, and in a discussion of tactile feedback
>systems--telerobotics, for instance--you need a 1000 Hz feedback to be able
>to do very delicate types of fine work.  Light can travel about 100 meters
>in 1/2000th of a second (it has to go to the other end & back).  

They've redefined the length of the meter?  Light moves at about 300,000 km/s
so in half a millisecond (300,000,000 m/s * 1/2000 s) it would travel about
150000 meters.

>So there
>is a fairly low limit to how quickly you can process--soon, everyone else
>starts lagging way behind you!  You know the delay when you're talking to
>someone a long way away by phone?  Kind of distracting, right?  Well, imagine
>that delay lengthened to several minutes, or a month, or a year....

You'd certainly notice the delay as you sped up.  The delay you notice 
on a phone link is most likely satellite delay: up to geosynch and back is 
about 72,000 km, or about .24 seconds each way, for a round-trip delay 
of .48 seconds (that's from you, up to the satellite, down to the party on the other
end and back).  So, to stop you from getting too frustrated it would probably
be good to keep perceived delay under a 1/2 second.  At a 1000-to-1 speed-up,
to get your 1000 Hz perceived feedback cycles you need to run at 1,000,000 Hz,
giving you a possible round trip of 150 m.
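
(Putting that arithmetic into a few lines of Python, just to make the
relationship explicit; the 72,000 km hop, 1000 Hz feedback rate, and
1000-to-1 speedup are simply the figures quoted above:)

C = 3.0e8  # speed of light, m/s

# Satellite hop: up to geosynch and back is roughly 72,000 km each way.
print(72_000_000 / C)             # ~0.24 s each way, ~0.48 s round trip

# Maximum round-trip radius for a given perceived feedback rate and mental
# speedup: one perceived cycle lasts 1/(feedback_hz * speedup) real seconds,
# and the signal has to get there and back within it.
def max_radius_m(feedback_hz, speedup):
    return C / (feedback_hz * speedup) / 2.0

print(max_radius_m(1000, 1000))   # ~150 m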

--
David L. Smith
FPS Computing, San Diego        ucsd!celit!dave or dave@fps.com
"You can"t build a national and international network using TCP/IP"
--Laurie Bride, Boeing Computer Services

[Internal nervous system latencies can get up to 0.1 sec, and yet
 we can do things requiring much finer timing than that.  It is
 believed that nerve signals carry (possibly implicit) timestamps
 and the brain sorts them out, so that (for example) you get a 
 stimulus on your foot at time 0, one on your nose at time 0.05,
 your brain receives the nose message at 0.06, and the foot message
 at 0.1; yet you consciously experience the foot stimulus as 
 happening first.  If this mechanism were integrated into the 
 sped-up brain, you could expect to have control as good as you
 have over your own body, within a radius c*t/speedup=3e8*.1/1e3
 =3e4 meters = 19 miles.  At a speedup of a million this goes to
 30m or 100 ft.
 --JoSH]

peter@prefect.berkeley.edu (Peter Moore) (12/21/90)

I can't help thinking everyone is missing something here.  I would think
that any significant increase in thought speed would have to involve
essentially all computation/mentation taking place in hardware.  If any
part of the process dropped down to wetware, you would have an
incredible bottleneck.  So a thought-accelerator would for all intents
and purposes have to be an artificial consciousness.  You might supply
the initial state and configuration, but once it started it would be a
clone of you.  There would be no reason, other than specific design,
for it to stop thinking once the wetware was disconnected.

Unless you believe in an un-physical soul that had to be contributed by
wetware to be able to think, you are just making a clone of yourself.  You
would be obsoleting yourself, not enhancing yourself.  (I can't help
thinking of a Calvin and Hobbes cartoon where Calvin clones himself so the
clone can do his homework while the original goes out to play.  Of course
the clone, being an exact clone of Calvin, wants no more to do with the
homework than the original.)

	Peter Moore
	an old, slow appendix on a shiny new machine.

[You could (a) do the Moravec trick of replacing a few neurons at a time,
 maintaining continuous consciousness throughout;  (b) assume it works
 and do a destructive analysis of your brain to do the copying;  (c)
 if non-destructive copying is possible, do it and just wait until the
 old body dies.  
 Personally, I would guess that people would begin by augmenting their
 existing brains and replacing parts they didn't consider "personal"
 (autonomic system, etc) and ultimately replacing the whole thing
 part by part.
 --JoSH]

webber@csd.uwo.ca (Robert E. Webber) (01/01/91)

.[You could (a) do the Moravec trick of replacing a few neurons at a time,
. maintaining continuous consciousness throughout;  ...
. Personally, I would guess that people would begin by augmenting their
. existing brains and replacing parts they didn't consider "personal"
. (autonomic system, etc) and ultimately replacing the whole thing
. part by part.
. --JoSH]

Hmmm, I think this trick predates Moravec - reminds me of a tale of a
ship being replaced plank by plank; the question being when it was a
new ship.  If I take someone's brain and replace it cell by cell with
the brain of squirrel, at what point would you say I have destroyed
the person's brain?  But, for many people, a squirrel's brain, the
final end product of the transfer, would clearly be a destruction of
their own brain/personality (I suspect - note I haven't actually tried
this recently).  Did I maintain continuous consciousness throughout?
If not, at which cell did I drop the flow of thought?  I think the
`gradualism' approach sounds nicer, but isn't really any safer than
the sudden download approach.  Once we get to a level of technology
where we can directly observe thought, we will feel much more comfortable
about transferring it from one location to another.  It will be interesting
to see how the direct observation of thought is first demonstrated.

As long as the augmentation is external, I don't think the change will
be fundamental.  Sitting at home surrounded by my books and computer
access, in some sense, it isn't really me making this posting, but
instead it is the accumulated knowledge of the room making the posting
as processed through me [hmmm, I even have a Chinese grammar around
here somewhere, but that's another posting].  What I write when I can
look up a quote, track down a reference, do a computation, is often
rather different than what I would say out on the street sans
augmentation.  With implanted computational resources, I will just be
able to carry my library et al. with me always.  I will become this
poorly integrated room creature and stop being me, but this is not
fundamental, since I already become the room creature from time to time
anyway.  (Right now I also become, from time to time, an office
creature, a library creature, and an outdoor walking creature, since
the way I think about things varies with the different environments
and the different information they give me access to.  With implantation,
all these creatures will merge.  But even that is not fundamentally
new, since there are plenty of people who don't read very much and
hence have similar information access continuously throughout their
lives.)

When I see that kiwi are on sale in the market I will be able to dig
up an article telling me how to tell the ripe ones from the spoiled
ones or the unripe ones and make the purchase that I really want
whereas now I usually have forgotten how to do this at the time the
purchase needs to be made.  And it will be nice to be able to make a
quick scan of net news while waiting in line at the supermarket (after
all, who would trust automatic food delivery to choose the choicest
kiwis?).  Life will be somewhat richer: you will remember more details
and be able to plan better what you want to do.  But most of the
benefits won't happen for the same reason that they don't when I am
sitting in my room, i.e., just because I have a book on my shelf that
has the perfect quote in it doesn't mean I will think to check to see
if there is anything appropriate in that book.  Although the room
creature has more information access than the street creature, it is
poorly integrated with the extra information.

However, once the augmentation is internalized in the sense that
information stored in the hardware is indistinguishable from
information stored in the wetware, then some fundamental changes will
occur.  One such change will be the transferability of skill.  After
all, you don't make someone a musician by handing them a bassoon and a
few books on music theory.  But if they can directly access the
memories of a professional bassoonist, I suspect they will actually
have the skill of playing a bassoon.  Of course, with current people,
there would still be a problem due to variance from person to person
of muscle tone, but there is no reason to believe that such variance
would still need to exist once things can be engineered to this degree
of precision.  The interesting question will doubtless be how much of
the memories of the bassoonist to transfer before one considers
oneself to have acquired the skill.  It will also be interesting to
find out that certain skills require mutually inconsistent world-views
(i.e., that someone cannot simultaneously be a bassoonist and a
painter, for example).

I think the world would also be incredibly richer.  When you see a
cloud you will be able to appreciate it simultaneously for its
thermodynamic properties, meteorological properties, and artistic
aspects.  All prior clouds you have ever seen will be available for
comparison and everything anyone has ever publicly said about clouds
will be available for consideration.  Similarly, with interacting with
other people, you will remember every previous interaction as well as
having a vast common pool of knowledge upon which to draw; hence
greatly reduce the redundancy.  Usenet postings will average 7
characters per message (no headers).

--- BOB (webber@csd.uwo.ca)

rjenkins@.com (Robert Jenkins) (01/01/91)

( I tried posting this idea a few weeks back; Josh, if I succeeded then, do
  not post this. )

Suppose we develop direct brain/computer interfaces.  I imagine this means
that accessing computer memory would seem the same to us as remembering 
things on our own.

Then, if we build computers that can think faster than us, we could link to
them, tell them our problems, then "remember" the computer's solutions.
We could even remember the steps the computer used to reach those solutions.
If we teach the computer to think like we do, how could we distinguish this
from just solving the problems ourselves?

For that matter, we could download all brainstorming, reasoning, and even
judging of alternatives into computers of our own design, then just 
remember the appropriate results.  The bandwidth required for remembering
final results would be fairly small.  The human nervous system would remain
relatively intact, yet people could think (and invent and code) as fast as
the top-of-the-line nanocomputers.
					- Bob Jenkins

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (01/04/91)

In article <Dec.31.18.42.18.1990.26067@athos.rutgers.edu> rjenkins@.com (Robert Jenkins) writes:
>Suppose we develop direct brain/computer interfaces.  I imagine this means
>that accessing computer memory would seem the same to us as remembering 
>things on our own.

This depends, of course, on just how direct "direct" turns out to be.
The more I use computers, the less optimistic I become about the
prospects for ever associating the word "direct" with a solution that
involved a computer. Rather, when I think of involving a computer
in a problem, I picture the street vendor from the film "Life of Brian"
who refused to sell anything outright, but instead insisted to his 
customers: "You must haggle with me."

A "direct" brain/computer interface would be useful, but unless by
"computer" we are talking about something more robust than the word
connotes today, such an interface might be a prescription for insanity.

>Then, if we build computers that can think faster than us, we could link to
>them, tell them our problems, then "remember" the computer's solutions.
>We could even remember the steps the computer used to reach those solutions.
>If we teach the computer to think like we do, how could we distinguish this
>from just solving the problems ourselves?

How does the situation you describe differ from having a very smart
professor following you around and whispering in your ear the solution
to every problem? I think you would be very aware that the professor
was solving the problems, and not you. However, if you knew that you
could *always* count on the professor being there for you, you might
start to internalize her abilities, in a sense. Just as you know that
when you reach your hand out it will grasp things, you might come to
view the professor's intellect as part of your own.

Come to think of it, something like this does happen to every 
person who works in a management-type position. Even if the manager
can't solve the problem himself, he becomes quite good at matching
the abilities of his subordinates to problems.

If we teach the computer to think like we do, the computer may not
be able to report the "steps" it used to solve many (perhaps most) 
problems. That is because we don't usually think in "steps". Or because
the steps we can report do not always capture the essence of 
problem-solving, whatever that is.  For example, suppose you solve a 
problem by numerically integrating an equation that you derived from 
first principles. You can describe the steps you took. You probably 
can't describe exactly how you inferred, from the problem statement, 
that these were the steps to take. You don't have to look very far
into your own thought processes before you see no further conscious
"steps". At some point, and a very close one, you just "know".

>For that matter, we could download all brainstorming, reasoning, and even
>judging of alternatives into computers of our own design, then just 
>remember the appropriate results.  The bandwidth required for remembering
>final results would be fairly small.

This is only true if the computer is super-intelligent enough to feed
us only the appropriate final results. And also if the problems we
are considering have succinct answers. Many interesting problems do
not. For example, how does one fly to Jupiter? I don't think a plan
to get to Jupiter is going to be very short, at least in light of
currently-available technology. Someday, we may have technological
infrastructure that makes flying to Jupiter as simple as flying
to France. But you can't make a very good name for yourself
today by solving the problem of flying to France (the way anybody
else does). Therefore I suspect we will always want to solve the 
remaining problems that lack succinct solutions.

> The human nervous system would remain
>relatively intact, yet people could think (and invent and code) as fast as
>the top-of-the-line nanocomputers.

I have no doubt that computers can increase intellectual efficiency,
but the greatest progress to date has been in solving routine,
repetitive problems. Invention does involve some repetitive steps, but
much of it seems unique to every new problem and domain.

Since we do not seem able to discover any generally applicable 
principles of invention and discovery, we have an enormous disparity
between intellectual leverage in well-understood vs. poorly-understood
domains. To see this, one only needs to try to solve a problem that
nobody knows how to solve today. Once a generation of scientists and
engineers have trampled a problem domain sufficiently, an average
person can work wonders in it. But at the frontiers, our productivity
is very low. Years seem to go by in which many workers generate
only enough results collectively to fill a few college course hours.


--
Dan Mocsny				Snail:
Internet: dmocsny@minerva.che.uc.edu	Dept. of Chemical Engng. M.L. 171
	  dmocsny@uceng.uc.edu		University of Cincinnati
513/751-6824 (home) 513/556-2007 (lab)	Cincinnati, Ohio 45221-0171

gd@dciem (Gord Deinstadt) (01/04/91)

rjenkins@.com (Robert Jenkins) writes:


>Then, if we build computers that can think faster than us, we could link to
>them, tell them our problems, then "remember" the computer's solutions.
>We could even remember the steps the computer used to reach those solutions.
>If we teach the computer to think like we do, how could we distinguish this
>from just solving the problems ourselves?

That sounds to me like just what our brains do right now.  My subjective
understanding of consciousness is that it is a mechanism for creating a
linear, coherent memory of a sequence of states out of the parallel,
unsynchronized outputs of different parts (or virtual parts) of the
brain.

If this is so, then we are already set up to merge in data from external
sources.  In that case it is quite reasonable to imagine spinning off
a software task, which migrates over the computer network finding the
data it needs, then having it return to my brain with the result,
and me never being aware of the difference.

However, I wouldn't risk a network connection to my brain.  Think of the
danger of viruses; whole populations gone mad.  Or docile.  Or having 
their memories modified.
--
Gord Deinstadt  gdeinstadt@geovision.UUCP

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (01/07/91)

In article <Jan.3.23.34.35.1991.3633@athos.rutgers.edu> cunews!cognos!geovision!gd@dciem (Gord Deinstadt) writes:
>However, I wouldn't risk a network connection to my brain.  Think of the
>danger of viruses; whole populations gone mad.  Or docile.  Or having 
>their memories modified.

Think of the danger of biological viruses that exist today. Do you
live in a hermetically sealed bubble?

Defending against computer viruses is much easier than defending
against biological viruses. Biological viruses have many avenues of 
entry into your body, and you are unable to close all of them.
Computer viruses have only one avenue of entry: you download a
set of instructions and run them. Also, when a biological virus
appears, there is nobody we can potentially locate and throw in jail.

Therefore, you will allocate some of your computer resources to a
logical immune system, just as you today allocate some of your
biological resources to a biological immune system. Here is one major
advantage your logical immune will have: it can learn from the
experiences of other entities. Your biological immune system is
entirely self-contained, making it robust and reliable in isolation.
But the downside is that your biological immune system does not
learn from the experiences of others. Every single biological 
immune system must be exposed to an invader before it can develop
resistance to that invader. (Actually, some learning does occur,
via selective breeding; people with genetic susceptibility to infections
tend to die off before reproducing (at least historically this was
true). But this is completely useless to you once you have been born.)

I suspect, however, that just as the AIDS virus succeeds by 
exploiting a vulnerability of the biological immune system, so
too will some computer viruses be able to exploit vulnerabilities
of the very systems that guard against other viruses. However, 
computer code is so much easier to analyze and work with (compared
to protoplasm) that I think a great advantage accrues to the 
defender.

Consider the total information content of the human organism. It
is much greater than the total information content of a virus. Yet
the virus can kill the human easily, because the whole information 
content of the human can't be flexibly brought to bear against
the virus.


--
Dan Mocsny				Snail:
Internet: dmocsny@minerva.che.uc.edu	Dept. of Chemical Engng. M.L. 171
	  dmocsny@uceng.uc.edu		University of Cincinnati
513/751-6824 (home) 513/556-2007 (lab)	Cincinnati, Ohio 45221-0171

russ@sharkey.cc.umich.edu (01/07/91)

dciem!cunews!cognos!geovision!gd (Gord Deinstadt) writes:

> However, I wouldn't risk a network connection to my brain.  Think of the
> danger of viruses; whole populations gone mad.  Or docile.  Or having 
> their memories modified.

To a certain degree, that danger has existed for thousands of years.
Fanatic movements, propagated by language, have infected many and
caused the madness, death and murder of millions.  To the extent
that history is re-written by such movements, the "memory" of
"society" is modified.  Meat-brain memory is very imperfect now.

Granted, this is slower than the nanotech network-brain equivalent,
but so are meat brains.  Further, the science of cryptography is
sufficiently advanced to allow tamper-proof signatures and other
verifying information on all digital documents, including recorded
memories.  This is much better than we can do for memories in meat
brains.  I believe that proper design will avoid the possibility
of new dangers of tampering and get rid of some of the old ones
(re-writing history, when everyone's "memory" is perfect and
comprehensive).  Dangers of fads and mass movements may well
remain with us.
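
(A toy illustration of that in Python, using a keyed hash as a stand-in
for a real digital-signature scheme; the "memory" text and the key are
invented for the example:)

import hashlib, hmac

key = b"my private signing key"        # in reality, a proper signature key
memory = b"On 1990-12-21 I read sci.nanotech."

tag = hmac.new(key, memory, hashlib.sha256).hexdigest()   # "sign" the memory

# Later: any tampering with the recorded memory invalidates the tag.
tampered = b"On 1990-12-21 I read rec.pets."
print(hmac.new(key, memory, hashlib.sha256).hexdigest() == tag)    # True
print(hmac.new(key, tampered, hashlib.sha256).hexdigest() == tag)  # False
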
--
Russ Cage, Robust Software Inc.                 |russ%rsi@sharkey.cc.umich.edu
						|russ@m-net.ann-arbor.mi.us

gd@dciem (Gord Deinstadt) (01/10/91)

dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:

>[I wrote]
>>However, I wouldn't risk a network connection to my brain.  Think of the
>>danger of viruses; whole populations gone mad.  Or docile.  Or having 
>>their memories modified.

>Think of the danger of biological viruses that exist today. Do you
>live in a hermetically sealed bubble?

My brain does.  Ever heard of the blood-brain barrier?
Of course it's not a hermetic seal, it's a selective membrane, but
it is designed to physically screen out viruses and other nasties.
They're too big to get through the pores.

Recent AIDS research reveals that it is imperfect, however. :(

>Defending against computer viruses is much easier than defending
>against biological viruses. Biological viruses have many avenues of 
>entry into your body, and you are unable to close all of them.
>Computer viruses have only one avenue of entry: you download a
>set of instructions and run them.

We are talking about a neural network here.  In a neural network there
is no distinction between data and program.  Also, in the context which
you trimmed out, the downloaded data was to appear to me as my own 
thought.  If the thought is "politician X is God" and I believe that
this revelation about Him came from within me, then the damage has
been done.  I don't know if this qualifies as a virus, but inasmuch
as it profoundly alters my subsequent behaviour it certainly acts like
one.

It needn't be anything so obvious, anyway.  Just edit the history books
and supply everyone with new "facts".  If done gradually, probably no one
would notice.  Tyrants try to do this all the time, and to some extent
they succeed even though they don't have direct access to the brain.
Even sincere people do it, convincing other people and themselves that
it didn't really happen that way... if your memories are in someone
else's care, you are no longer free.

> Also, when a biological virus
>appears, there is nobody we can potentially locate and throw in jail.

Which might deter the small fry, but the danger is the big fish,
like politician X.

>Therefore, you will allocate some of your computer resources to a
>logical immune system, just as you today allocate some of your
>biological resources to a biological immune system.

Can you suggest a mechanism, in principle, for such a thing?
The only thing I can think of is to somehow label data as coming
from "out there".  Perhaps the only safe connection is via the senses,
ie. you hear the computer's voice.  But this severely limits what
it can do for you.

> Here is one major
>advantage your logical immune system will have: it can learn from the
>experiences of other entities. Your biological immune system is
>entirely self-contained, making it robust and reliable in isolation.

Actually, our immune systems no longer function in isolation, since
the invention of vaccines.  Our immune systems are now part of a
system that includes all the research laboratories working on
communicable diseases, as well as the health care delivery system.
Sometimes this system provides an avenue for biologic attack, as
for example the spread of AIDS through blood transfusions and
(in the third world) unsterile needles.  However it is a physically
partitioned system and that provides a great deal of protection.
By the same token I might well accept a ROM library implant in
my head; it's the network connection I would reject.  I want to
control when that library changes.

>computer code is so much easier to analyze and work with (compared
>to protoplasm) that I think a great advantage accrues to the 
>defender.

Ah, but so far the pathogens our bodies have had to deal with have
been created by blind evolution, not by intelligent and hostile
entities.
--
Gord Deinstadt  gdeinstadt@geovision.UUCP

ward@tsnews.convergent.com (Ward Griffiths) (01/12/91)

dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:


>Therefore, you will allocate some of your computer resources to a
>logical immune system, just as you today allocate some of your
>biological resources to a biological immune system. Here is one major
>advantage your logical immune system will have: it can learn from the
>experiences of other entities. Your biological immune system is
>entirely self-contained, making it robust and reliable in isolation.
>But the downside is that your biological immune system does not
>learn from the experiences of others. Every single biological 
>immune system must be exposed to an invader before it can develop
>resistance to that invader. (Actually, some learning does occur,
>via selective breeding; people with genetic susceptibility to infections
>tend to die off before reproducing (at least historically this was
>true). But this is completely useless to you once you have been born.)

Actually, this is an area where technology has already had a 
profound impact on biological mechanisms.  For about two 
centuries, we have had increasing abilities to prepare the 
immune system against attacks by specific invaders by the use 
of immunization.  This is fundamentally equivalent to having 
one body learn from the experiences of others.  Admittedly, the 
process is a step removed: inoculation is rather like giving an 
AI a virus and letting it find a cure by itself, as the body has 
to, whereas a direct transplant of a mass of antibodies produced 
in another body would be like having a previously debugged 
anti-viral program transferred from the net.  With appropriate 
nanotech, the antibodies could in fact be directly mass-produced, 
removing the (small but real) risk of infection by an immunizing 
agent.

<( Looking at the above paragraph, I see that I am rambling 
badly.  I hate product deadlines and their associated fatigue 
poisons.  There's another area where AI up/downloading and 
nanomachine biological maintenance would come in real handy. )>

-- 
The people that make Unisys' official opinions get paid more.  A LOT more.
Ward Griffiths, Unisys NCG aka Convergent Technologies
To Hell with "Only One Earth"!  Try "At Least One Solar System"!

How many years must some people exist, before they're allowed to be free?  PP&M
If they have to wait until they're allowed, they never will be.  Me

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (01/12/91)

In article <Jan.9.17.27.37.1991.14240@athos.rutgers.edu> cognos!geovision!gd@dciem (Gord Deinstadt) writes:
>dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:
>>Think of the danger of biological viruses that exist today. Do you
>>live in a hermetically sealed bubble?
>
>My brain does.  Ever heard of the blood-brain barrier?

Yes. However, your body has quite a number of failure modes that do
not require its compromise. Your brain is reasonably sealed today,
but nobody has yet out-lived their body.

>>Defending against computer viruses is much easier than defending
>>against biological viruses. Biological viruses have many avenues of 
>>entry into your body, and you are unable to close all of them.
>>Computer viruses have only one avenue of entry: you download a
>>set of instructions and run them.
>
>We are talking about a neural network here.  In a neural network there
>is no distinction between data and program.

My mistake, then. I assumed we were talking about some sort of
a high-fidelity neural-network simulation running on top of a relatively
conventional computer (with adequate speed, etc.). The underlying
computer will certainly (?) distinguish between program and data,
if only to guard against the very possibility you fear. You could,
for example, elect to buffer your neural network brain against
fresh infusions of outside information by running them first in
an isolated simulation. 

A neural network doesn't distinguish between data and program, but
certainly the physical organization of our brains implies a distinction
between network topology and sensory data. If this were not so, then
perfect brainwashing would be possible. I.e., merely subject a person
to the correct input data stream (sensory stimuli), and program their
mental state to your liking. While people certainly can be persuaded
of some things, limits seem to exist. For example, my argument is
not likely to persuade you :-)

And even if perfect brain-programming-via-sensory-stimuli is possible,
consider what it implies: essentially drowning out all competing
stimuli. Really effective brainwashing requires physically confining
a victim and wearing down their resistance through physical stress.
I expect your neural-network simulation would be similarly
robust and defensive. For an invader to sneak in and corrupt your
thoughts, it would have to mount an all-out attack. Since you
would be starting with a substantial stock of running programs,
redundant processes checking each other, and many connections to the
outside, an invader might have to expend more resources than you
have. Remember, if you start acting "too funny", your networked
friends will notice. Essentially, they will be checking your behavior
against their own gross models of your past behavior. This is, of
course, pretty much how we detect the onset of mental disease today.

Could someone corrupt you and all your friends at the same time? And
all their friends? (Remember, some of your "friends" can be sub-processes
running on your computer.) Possibly, but then we reach the fundamental
realities of evolutionary competition. One brain, or a small group of
brains, are most unlikely to be able to stay uncorrupted while they
corrupt all the other brains. 

Recall the Fundamental Theorem of Conspiracy:

"Every organization is infiltrated by clandestine agents from some
competing organization. This competing organization is, recursively,
infiltrated by clandestine agents from some other organization. In
other words, 'Big spooks have little spooks upon their backs to
bite them...'"

The only thing that keeps us sane is this: no organization can commit
100% of its resources to infiltrating other organizations. For an
organization to send out 1 clandestine agent, it must maintain some
X supporting workers, where X>1. Thus, any infiltrating organization
must itself have some organization susceptible to infiltration. Therefore
we have competition.

>It needn't be anything so obvious, anyway.  Just edit the history books
>and supply everyone with new "facts".  If done gradually probably noone
>would notice.  Tyrants try to do this all the time, and to some extent
>they succeed even though they don't have direct access to the brain.

Hold on! Tyrants succeed *precisely* because they are exploiting
*information poverty*. If the cost of processing information is very
high, then the power to do so accumulates into a few hands. However,
as the cost of processing information drops, the power to process
information *necessarily* distributes.

When books cost a fortune to own or create, then only the privileged 
few can own or create them. Not surprisingly, the privileged few 
exploit their information power to maintain their privilege. But make 
books so cheap that everybody can own and create them, and the 
privileged few can no longer hope to control what will be in those 
books. Remember, democracy resulted from a technological innovation
that reduced the cost of information power: the printing press.

>Even sincere people do it, convincing other people and themselves that
>it didn't really happen that way... if your memories are in someone
>else's care, you are no longer free.

But I don't see why your memories should be in someone else's care.
You can make all the backups you want, stick in all the CRC you like,
maintain a whole sequence of progressively more approximate 
models which check each other, etc. You aren't going to leave yourself
wide open. You'll have an immune system.

Your body doesn't leave itself open. It has multiple lines of defense 
against invaders. A big gob of protoplasm like you or me looks like
a hell of a food source to all the microbes out there. But the dinner
is quite capable of striking back.

What if your immune system gets corrupted? Well, it does happen (e.g.,
AIDS). However, this gets back to the Fundamental Theorem of Conspiracy.
Not *everybody* can simultaneously be totally corrupted. Well, maybe. :-)

>> Also, when a biological virus
>>appears, there is nobody we can potentially locate and throw in jail.
>
>Which might deter the small fry, but the danger is the big fish,
>like politician X.

I believe you are still thinking against the historical backdrop of
*information poverty*. When people can't process enough information
to support their basic needs, then division of labor is necessary, 
and power accumulates into a few hands. The rise of information power
will tear down the social structures that today grant politician X 
the power to abuse. Just watch. Heck, just look at Eastern Europe.
You don't even need much information power at all.

When everybody has tera-MIPS or whatever, how in the world is
politician X going to get away with anything? What's he going to
have, tera-tera-MIPS?

>>Therefore, you will allocate some of your computer resources to a
>>logical immune system, just as you today allocate some of your
>>biological resources to a biological immune system.
>
>Can you suggest a mechanism, in principle, for such a thing?

Yes. 

1. You maintain a series of approximate representations of your
brain state. Each of these connects with the outside world via
a restricted interface. Your innermost "self" obtains outside
information only after a time delay, and only through the indirect
agency of your approximate selves. Each layer constantly monitors
neighboring layers for damage and/or suspicious behavior, and doesn't
pass anything inward until it has satisfied itself that everything
is fine. (A toy sketch of this layered arrangement follows point 2 below.)

The royalty of old had "royal tasters". To guard against being poisoned 
by conspirators, the royalty had someone else taste all their food 
first.

Similarly, an advancing army always sends out scouts and patrols. No
doubt a lot of scouts get killed. But this is the price the army
pays to learn about threats.

2. You maintain multiple outside connections which compete with each
other. Today, if you start to go insane, your friends will notice
and become concerned. Your friends are unlikely to go insane at
exactly the same time that you do.
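
(A toy sketch of mechanism 1 in Python. The fixed delays, the keyword
check, and the message strings all stand in for the "approximate
representations" and behavior models described above; nothing here is a
claim about how a real system would be built:)

from collections import deque

class Layer:
    """One ring of the onion: quarantines incoming messages for a delay and
    passes inward only what still looks sane afterwards."""
    def __init__(self, name, delay_ticks, inner=None):
        self.name = name
        self.delay = delay_ticks
        self.inner = inner            # next layer toward the innermost self
        self.queue = deque()          # [ticks_remaining, message]
        self.accepted = []            # what a layer with no inner keeps as its own

    def looks_sane(self, message):
        # Stand-in anomaly check; a real one would compare the message
        # against a gross model of past behavior.
        return "politician X is God" not in message

    def receive(self, message):
        self.queue.append([self.delay, message])

    def tick(self):
        for item in list(self.queue):
            item[0] -= 1
            if item[0] <= 0:
                self.queue.remove(item)
                if not self.looks_sane(item[1]):
                    continue                     # quietly drop the invader
                if self.inner:
                    self.inner.receive(item[1])  # pass it one layer inward
                else:
                    self.accepted.append(item[1])

# The innermost "self" sees the outside only through two buffering layers.
core   = Layer("innermost self", delay_ticks=1)
middle = Layer("approximate self", delay_ticks=3, inner=core)
outer  = Layer("public interface", delay_ticks=1, inner=middle)

outer.receive("kiwis are on sale at the market")
outer.receive("politician X is God")
for _ in range(10):
    outer.tick(); middle.tick(); core.tick()

print(core.accepted)   # only the benign message makes it to the core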

>The only thing I can think of is to somehow label data as coming
>from "out there".  Perhaps the only safe connection is via the senses,
>ie. you hear the computer's voice.  But this severely limits what
>it can do for you.

Safety always limits. However, death is considerably more limiting.
You can't compare your practical capability with some ideal capability
that would exist if you had no outside threats. You always have 
outside threats. The cost of those threats is not your money, any
more than the taxes Uncle Sam takes out of your check are your money.

Think of how much cheaper a house could be if we could be sure the
weather would never get ugly. Well, the weather *does* get ugly.

>By the same token I might well accept an ROM library implant in
>my head; it's the network connection I would reject.  I want to
>control when that library changes.

How do you know the ROM wouldn't have "back doors"? :-) 

In any case, even with a network connection, "you" shouldn't have
any trouble controlling when your library changes. Every interface
between two systems is restricted in some way (or else they wouldn't
be two systems).

>>computer code is so much easier to analyze and work with (compared
>>to protoplasm) that I think a great advantage accrues to the 
>>defender.
>
>Ah, but so far the pathogens our bodies have had to deal with have
>been created by blind evolution, not by intelligent and hostile
>entities.

However, that blind evolution has quite a head start. Despite all our
hostility and intelligence, we are not yet capable of constructing,
from simple chemical reagents, a biological invader as effective as
the AIDS virus. 

Remember, most conflicts between humans are peer-to-peer conflicts.
In virtually every sustained war, neither side maintains a unilateral,
overpowering technological edge. Whenever one side invents the
"secret weapon" that will grant them supremacy, the other side gets
it shortly thereafter. That is because engineering is harder than
reverse engineering. The spread of information power tends to
level the field, not create more pockets of concentrated threat.





--
Dan Mocsny				Snail:
Internet: dmocsny@minerva.che.uc.edu	Dept. of Chemical Engng. M.L. 171
	  dmocsny@uceng.uc.edu		University of Cincinnati
513/751-6824 (home) 513/556-2007 (lab)	Cincinnati, Ohio 45221-0171