[sci.nanotech] Intractability of active-shield testing

dmo@turkey.philips.com (Dan Offutt) (06/21/89)

Suppose that AI-based design systems that can think a million times as
fast as a human designer become possible, inexpensive, and numerous.
What changes would this imply in the rate of technological advance?

It seems clear that there will be *some* increase in the rate of
technological advance.  But the increase will be much less than
proportional to the hardware speedup obtained.  Million-times-faster
designers cannot bring in one year the designs that unspeeded
designers would bring in a million years.  One reason, briefly, is
that a speedup in conscious design cannot serve as a substitute for
real-world testing of design realizations.  Real-world testing takes
time, cannot be speeded up without substantial risk, and produces
empirical data about design performance that cannot be obtained in
any other way and which is a critical ingredient in subsequent
design efforts.
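This argument has the shape of Amdahl's law: if real-world testing is a fixed-time phase of each development cycle, speeding up only the design phase yields sharply diminishing returns.  A back-of-the-envelope sketch in Python (the design fractions below are illustrative assumptions, not measurements):

```python
def effective_speedup(design_frac, design_speedup):
    """Amdahl-style bound on whole-cycle speedup when only the
    design phase benefits: the testing fraction (1 - design_frac)
    of the cycle takes just as long as before."""
    return 1.0 / (design_frac / design_speedup + (1.0 - design_frac))

# Even if design is 90% of a development cycle, a million-fold
# design speedup shortens the whole cycle by less than 10x.
print(effective_speedup(0.9, 1_000_000))   # ~10
print(effective_speedup(0.5, 1_000_000))   # ~2
```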

For example, the testing of a particular make and model of automobile
is performed by consumers who drive it through precisely the
environment to which it must be fit, if the design is to be a
success.  This testing process produces a steady stream of feedback to
designers: consumer complaints about performance and aesthetics, manner
of failure in accidents, repair rates and types, median useful
lifetime and so forth.  This information is invaluable in uncovering
*in-principle-unpredictable* design flaws.  Testing cannot be speeded
up: One must wait patiently for consumers to slowly generate
design-performance data as they go about their everyday driving
activities.

The objection may be raised that design flaws are predictable by
simulation or simplified mock-ups of real environments.  But many
design flaws are still unpredictable because design failure can be a
function, in part, of almost anything in the environment to which the
design must be fit.  And complete information about such environments
is never available to the designer, simulation programmer, or mock-up
builder.

These remarks apply to designs in general, and nanomachine designs in
particular.  Nanomachines are likely to be more complex than
present-day machines (holding size constant).  In general, the more
complex the machine, the more difficult it will be to predict its
interaction with the environment to which it must be fit.
Consequently, collecting performance data during the testing phase
will be at least as important for nanomachines as it is for today's
machines.  Thus the time required for testing nanomachines will limit
the rate of nanotechnological progress to much less than might be
suspected, given the availability of a million-fold speedup in the
speed of AI-based design programs.

These observations apply to the distributed nanomachines called active
shields.  If a prototypical active shield is not tested in the
real world, then many or most of its design flaws will not be
identified.  If it is tested in the real world under the actual
noxious conditions it is supposed to protect against, then it is
already too late to be of help.  If it is tested in a scaled-down
sealed ecosystem (a sealed greenhouse, for example), then any
characteristic of the complete environment not present in that
scaled-down environment is a potential source of design failure.
Simulations are even more unsatisfactory.  It follows that active
shields are less likely than one might hope to be ready in time to
protect against the replicating nanomachines that will inevitably be
unleashed into the environment.



Caveats:

For a given design, some types of quality feedback will come earlier
and some later.  The lifetime of some automobiles is ten years.
Certain facts about such automobiles are discovered only during the
tenth year, and not earlier.

One may invest more or less effort in acquiring information about the
environment to which one's design must be fit.  There is the issue of
which types of feedback to seek out.  The building of a model of this
environment is a resource-consuming task in itself.

There is the issue of how much small errors or incompletenesses in the
designer's information about the target environment affect the
success of a design.  Personally, I suspect that seemingly
insignificant details about an artifact's environment can often have a
very large impact upon success, especially if those details have a
long period of time over which to act.

The strictly-internal interactions among the components of a design
can be complex enough to make the success of a design unpredictable
even when the environment is both simple and fully understood.
Consider the failure of Apollo 13.  Empty space is a very simple
environment.  An Apollo spacecraft is fairly complex.

There is the question of whether the design realization can be neatly
distinguished from its environment.  Design affects choice of
environment since different artifacts will be sorted into different
niches.  Sports cars are sorted into different niches in the economy
than passenger sedans.  Sports cars end up in accidents more frequently
than passenger sedans.


Dan Offutt
dmo@philabs.philips.com

dmocsny@uceng.uc.edu (daniel mocsny) (06/22/89)

In article <Jun.20.23.27.17.1989.28085@athos.rutgers.edu>, dmo@turkey.philips.com (Dan Offutt) writes:
> Suppose that AI-based design systems that can think a million times as
> fast as a human designer become possible, inexpensive, and numerous.
> What changes would this imply in the rate of technological advance?

It would change everything in ways we can hardly imagine at present.
But we can amuse ourselves by speculating, and arguing ;-)

> ... the increase will be much less than
> proportional to the hardware speedup obtained.  Million-times-faster
> designers cannot bring in one year the designs that unspeeded
> designers would bring in a million years. One reason, briefly, is
> that a speedup in conscious design cannot serve as a substitute for
> real-world testing of design realizations.  Real-world testing takes
> time, cannot be speeded up without substantial risk, and produces
> empirical data about design performance that cannot be obtained in
> any other way and which is a critical ingredient in subsequent
> design efforts.

OK, but hold on a second! Think about all the data consumers generate
every day that vendors have no choice but to ignore because (1) they
can't handle the data volume, (2) no communication systems are in place
to make gathering the data easy, and (3) the data is unavailable for
political reasons (e.g., trade secrets, inter- and intra-corporate
rivalry). If we grant your original premise, that mechanical
super-intelligence is cheap and ubiquitous (and further assume that
humans will be able to stay on top of it!), then vendors will have
*vastly* increased ability to gather data and use it. Similarly,
consumers will have a vastly increased ability to record and report
complaints.

Even if real-world data doesn't get generated any faster, if we simply
start using a vastly larger portion of the data now going to waste,
product improvements will speed up drastically. Think of all the
millions of consumers out there using all of those products. How much
time passes now before major design flaws filter back to the vendors
and are corrected? Too much. Similarly, enormous amounts of data are
already available for every major product category one cares to name.
I suggest that most of what a present-day vendor will learn from a
product-testing program must already be available in principle. Here
we can divide the data into essence and accident. The accidents are
all those things you should have already known (for example, so many
automobiles have been sold that by now the general outline of consumer
preference should not be any great surprise), whereas the essence is
whatever really is new about the product and heretofore untested.

With massive increases in data-gathering and -handling power, vendors
will be able to greatly increase their efficiency in designing
products that work the first time. But we will observe even more
fundamental changes. For example, if we had super-intelligent
machines, we would probably not use them to build better automobiles.
Instead, we would no longer need the present levels of automobile use,
because the existence of such machines would imply the existence of
communication technology fast and transparent enough to make most of
our present physical travel a waste of time.

When the jet engine appeared, nobody tried to mount one on a horse.
New technological capabilities do not always help you do better what
you are already doing. Instead, they often push you into doing
entirely new things.

Dan Mocsny				Snail:
Internet: dmocsny@uceng.UC.EDU		Dept. of Chemical Engng. M.L. 171
513/751-6824 (home)			University of Cincinnati
513/556-2007 (lab)			Cincinnati, Ohio 45221-0171

macleod@drivax.UUCP (MacLeod) (06/22/89)

In article <Jun.20.23.27.17.1989.28085@athos.rutgers.edu> dmo@turkey.philips.com (Dan Offutt) writes:

>These remarks apply to designs in general, and nanomachine designs in
>particular.  Nanomachines are likely to be more complex than
>present-day machines (holding size constant).  In general, the more
>complex the machine, the more difficult it will be to predict its
>interaction with the environment to which it must be fit.

I would sleep better if all engineers, of every discipline, read a slender
volume called "Systemantics" by a medical doctor named John Gall.  It is 
a short, humorous series of essays exploring a number of empirically derived
axioms about system behavior.  Like its predecessor, "The Peter Principle", 
it is actually profound truth wrapped in humor.

Gall shows that as system complexity grows, the possibilities - and
likelihood - of anomalous behavior increase, presumably as some
function of the number of machine states.  The larger the system, the
more it tends to impede its own functioning.  The examples Gall cites
as climax designs often perversely generate exactly the problem
they were originally designed to surmount - the classic example is the
mammoth VAB at Cape Canaveral.  Built to protect Saturn V components
from the weather, it generates its own rain internally. 
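Gall's "some function of the number of machine states" can be made concrete with a toy model: n components with k states each give k**n system states, so even a minuscule per-state chance of misbehavior makes anomalies nearly certain at modest sizes.  A deliberately crude Python sketch (the per-state reliability figure is an assumption for illustration only):

```python
def p_no_anomaly(n_components, states_per_component=2, p_state_ok=0.999999):
    """Probability that every reachable system state behaves as
    designed, assuming states misbehave independently -- a crude
    model, but it exhibits the exponential blowup Gall describes."""
    n_states = states_per_component ** n_components
    return p_state_ok ** n_states

# 20 binary components -> ~10^6 states: anomalies already likely.
# 40 binary components -> ~10^12 states: anomalies all but certain.
print(p_no_anomaly(20))
print(p_no_anomaly(40))
```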

Michael Sloan MacLeod  (amdahl!drivax!macleod)

[This is often called the "law of unintended effect" and applies to 
 almost any complex system, not just engineered mechanisms.  Indeed,
 it applies less to engineered machines than to most other complex
 systems.  The VAB really does protect rockets from the strong winds
 that are common on the Florida coastline.  However, purchasing 
 departments and their regulations typically cause organizations
 to spend twice as much for what they buy.  Expanded legal liability
for manufacturers and doctors causes talented people to leave the 
 field, and safety-oriented products and medicines to be withdrawn.

 The more complex something is, the greater the chance its design 
 and production will be done by committee and bureaucracy.  This
 is the major reason for the more-than-linear decrease in reliability
 and effectiveness with size.

 There is some reason to hope that for engineered machines, AI systems
 will have their biggest impact simply by letting bigger projects be
 handled by a single individual.  Furthermore, I'll wager that the 
 first corporation to replace its *management* with a computer program
 will wipe up the competition in no time flat.

 Of course, as I have noted here before, there are some dangers inherent
 with trying the same thing with the government...

 --JoSH]

alan@oz.nm.paradyne.com (Alan Lovejoy) (06/24/89)

In article <Jun.20.23.27.17.1989.28085@athos.rutgers.edu> dmo@turkey.philips.com (Dan Offutt) writes:
>...  Million-times-faster
>designers cannot bring in one year the designs that unspeeded
>designers would bring in a million years.  One reason, briefly, is
>that a speedup in conscious design cannot serve as a substitute for
>real-world testing of design realizations.  Real-world testing takes
>time, cannot be speeded up without substantial risk, and produces
>empirical data about design performance that cannot be obtained in
>any other way and which is a critical ingredient in subsequent
>design efforts.

Ok.  You make a good point.  But I have three quibbles:

1) So what?  The "million times speed-up" is a VERY conservative estimate of
the possible increase in computing speeds due to nanotechnology, quantum
circuits and other as-yet-unknown advances.

What does it matter if the effective speed-up in technical progress which is 
practically obtainable is only 10**5?  The thrust of Drexler's argument is not 
mortally wounded simply by a one or two order of magnitude overestimation.

2) The problems which you rightly identify as having a significant dampening
effect on the rate of progress which is practically achievable do not apply
to the same degree to ultra-intelligent full-spectrum AIs.  Your arguments
hit with full force only in the case of idiot savant machines, not in the
case of intelligences which transcend our own in all ways.  Such intelligences
will be able to find elegant solutions to many problems that we find hard,
intractable or even do not see at all.  They are really beyond our ability
to predict or understand.  We are like salamanders trying to envision the
problem-solving skills of Homo sapiens.

3) All the problems which you have listed that hinder the development of
active shields also work against the designers of gray goo.  The goo has to
both defend itself against the shield and try to accomplish its main objective
of destroying the environment.  The shield must defend itself against the goo
and attempt to destroy and/or incapacitate the goo. Both may masquerade as
the other.  Both will attempt to interfere with the other's communications.
Assemblers and disassemblers will mindlessly follow whatever program they
receive, regardless of its origin.  The shield may be able to shut the goo
down just by jamming the goo's communications, destroying its energy supplies
and immobilizing ALL molecules in the affected area, including those needed
or used by the goo, in such a way that nothing of importance (e.g., human 
bodies) is injured beyond the ability of nanomachines to repair.  It is VERY
difficult to win a game of chess against an opponent who is determined to
achieve a draw, unless the player who wants to win is much better than the
player who wants merely not to lose.  The shield "wins" merely by making 
it impossible for nanomachinery to function, or at least by making it
impossible for nanocomputers to send commands to assemblers and disassemblers.

Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
______________________________Down with Li Peng!________________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.