[net.ai] Ultimate limit on computing speed

okeefe.r.a.%edxa@sri-unix.UUCP (11/08/83)

From:  O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>

--------
    There was a short letter about this in CACM about 6 or 7 years ago.
I haven't got the reference, but the argument goes something like this.

1.  In order to compute, you need a device with at least two states
    that can change from one state to another.
2.  Information theory (or quantum mechanics or something, I don't
    remember which) shows that any state change must be accompanied
    by a transfer of at least so much energy (a definite figure was
    given).
3.  Energy contributes to the stress-energy tensor just like mass and
    momentum, so the device must be at least so big or it will undergo
    gravitational collapse (again, a definite figure).
4.  It takes light so long to cross the diameter of the device, and
    this is the shortest possible delay before we can definitely say
    that the device is in its new state.
5.  Therefore any physically realisable device (assuming the validity
    of general relativity, quantum mechanics, information theory ...)
    cannot switch faster than (again a definite figure).  I think the
    final figure was 10^-43 seconds, but it's been a long time since
    I read the letter.  (A rough reconstruction of how such a figure
    might come out is sketched below.)
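
     I don't have the letter's definite figures, so here is a rough
reconstruction of how steps 2-4 might combine, on my own assumptions
rather than the letter's: take the minimum energy for a state change
completed in time t to be roughly hbar/t, require that the device's
Schwarzschild radius 2GE/c^4 not exceed its size L, and take the
shortest delay to be the light-crossing time t = L/c.  Eliminating E
and L gives t >= sqrt(2*hbar*G/c^5), essentially the Planck time,
which is within an order of magnitude of the 10^-43 seconds quoted
above.  A few lines of Python to check the arithmetic:

        from math import sqrt

        # Back-of-the-envelope check (my assumptions, not the CACM letter's):
        #   minimum energy for a state change in time t:  E ~ hbar / t
        #   no gravitational collapse:                    2*G*E/c**4 <= L
        #   shortest switching delay:                     t = L / c
        # Eliminating E and L gives t >= sqrt(2*hbar*G / c**5).

        hbar = 1.054571817e-34   # J*s, reduced Planck constant
        G    = 6.67430e-11       # m^3 kg^-1 s^-2, Newton's constant
        c    = 2.99792458e8      # m/s, speed of light

        t_min = sqrt(2 * hbar * G / c**5)
        print("lower bound on switching time ~ %.1e s" % t_min)
        # -> about 7.6e-44 s, the same order as the 10^-43 s above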


     I have found the discussion of "what is intelligence" boring,
confused, and unhelpful.  If people feel unhappy working in AI because
we don't have an agreed definition of the I part (come to that, do we
*really* have an agreed definition of the A part either?  if we come
across a planet inhabited by metallic creatures with CMOS brains that
were produced by natural processes, should their study belong to AI
or xenobiology, and does it matter?), why not just change the name of
the field, say, to "Epistemics And Robotics"?  I don't give a tinker's
curse whether AI ever produces "intelligent" machines; there are tasks
that I would like to see computers doing in the service of humanity
that require the representation and appropriate deployment of large
amounts of knowledge.  I would be just as happy calling this AI, MI,
or EAR.

     I think some of the contributors to this group are suffering from
physics envy, and don't realise what an operational definition is.  It
is a definition which tells you how to MEASURE something.  Thus length
is operationally defined by saying "do such and such.  Now, length is
the thing that you just measured."  Of course there are problems here:
no amount of operational definition will justify any connection between
"length-measured-by-this-foot-rule-six-years-ago" and "length-measured-
by-laser-interferometer-yesterday".  The basic irrelevance is that
an operational definition of, say, light (what your light meter measures)
doesn't tell you one little thing about how to MAKE some light.  If we
had an operational definition of intelligence (in fact we have quite a
few, and, as with all operational definitions, nothing to connect them),
there would be no reason to expect it to help us MAKE something intelligent.