Galway@UTAH-20@sri-unix.UUCP (08/17/83)
From: William Galway <Galway@UTAH-20>
Perhaps AI-digest readers will be amused by the following
article. I believe it's by Danny Cohen, and appears in the
proceedings of the CMU Conference on VLSI Systems and
Computations, pages 124-125, but this copy was dug out of local
archives.
..................................................................
The VLSI Approach to
Computational Complexity
Professor J. Finnegan
University of Oceanview, Kansas
(Formerly with the DP department of the
First National Bank of Oceanview)
The rapid advance of VLSI and the trend toward ever-decreasing
geometrical feature size, through the submicron and the subnano
to the subpico, and beyond, have dramatically reduced the cost of
VLSI circuitry. As a result, many traditionally unsolvable
problems can now (or will in the near future) be easily solved
using VLSI technology.
For example, consider the traveling salesman problem, where the
optimal sequence of N nodes ("cities") has to be found. Instead
of applying sophisticated mathematical tools that require an
investment in human thinking, which, because of the rising cost
of labor, is economically unattractive, VLSI technology can be
applied to construct a simple machine that will solve the
problem!
The traveling salesman problem is considered difficult because of
the requirement of finding the best route out of N! possible
ones. A conventional single processor would require O(N!) time,
but with clever use of VLSI technology this problem can easily be
solved in polynomial time!!
The solution is obtained with a simple VLSI array having only N!
processors. Each processor is dedicated to a single possible
route that corresponds to a certain permutation of the set
{1, 2, 3, ..., N}. The time to load the distance matrix and to select
the shortest route(s) is only polynomial in N. Since the
evaluation of each route is linear in N, the entire system
solves the problem in just polynomial time! Q.E.D.
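Simulating the scheme in software makes the division of labor
explicit. The sketch below is illustrative only, with one loop
iteration standing in for each of the N! dedicated processors;
the function and variable names are ours, not the author's.

    from itertools import permutations

    def shortest_route(dist):
        # Brute-force TSP, organized the way the N!-processor array
        # would be: each "processor" (here, one loop iteration) is
        # dedicated to a single permutation and evaluates its route
        # in O(N) steps; the minimum is selected at the end.
        n = len(dist)
        best_route, best_len = None, float("inf")
        for route in permutations(range(n)):
            length = sum(dist[route[i]][route[i + 1]] for i in range(n - 1))
            length += dist[route[-1]][route[0]]   # close the tour
            if length < best_len:
                best_route, best_len = route, length
        return best_route, best_len

Run sequentially, of course, this takes O(N x N!) time rather
than the promised polynomial; the factorial has merely been moved
from the running time into the processor count.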
Readers familiar only with conventional computer architecture may
wrongly suspect that the communication between all of these
processors is too expensive (in area). However, with the use of
wireless communication this problem is easily solved without the
traditional, conventional area penalty. If the system fails to
obtain from the FCC the required permit to operate in a
reasonable domain of the frequency spectrum, it is always
possible to use microlasers and picolasers for communicating
either through a light-conducting substrate (e.g. sapphire) or
through a convex light-reflecting surface mounted parallel to the
device. The CSMA/CD (Carrier Sense Multiple Access, with
Collision Detection) communication technology, developed in the
early seventies, may be found to be most helpful for these
applications.
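For readers who have not met it, CSMA/CD amounts to "listen
before talking, and back off randomly after a collision". A toy
transmitter, with an entirely hypothetical channel interface and
grossly simplified timing, might look like this:

    import random
    import time

    def csma_cd_send(channel, frame, max_attempts=16):
        # Toy CSMA/CD: carrier sense, collision detection, binary
        # exponential backoff. "channel" is assumed to offer
        # is_busy(), transmit(frame) and collision_detected();
        # that interface is invented for this sketch.
        for attempt in range(max_attempts):
            while channel.is_busy():          # carrier sense
                time.sleep(0.001)
            channel.transmit(frame)
            if not channel.collision_detected():
                return True                   # frame got through
            # collision: back off a random number of slot times,
            # doubling the range after each failed attempt
            slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
            time.sleep(slots * 0.000512)
        return False                          # give up eventually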
If it is necessary to solve a problem with a larger N than the
one for which the system was initially designed, one can simply
design another system for that particular value of N, or even a
larger one, in anticipation of future requirements. The
advancement of VLSI technology makes this iterative process
feasible and attractive.
This approach is not new. In the early eighties many researchers
discovered the possibility of accelerating the solution of many
NP-complete problems by a simple application of systems with an
exponential number of processors.
Even earlier, in the late seventies many scientists discovered
that problems with polynomial complexity could also be solved in
lower time (than the complexity) by using a number of processors
that is also a polynomial function of the problem size,
typically of a lower degree. NxN matrix multiplication by
systems with N^2 processors used to be a very popular topic for
conversations and conference papers, even though less popular
among system builders. The requirement of dealing with the variable
N was (we believe) handled by the simple P/O technique, namely,
buying a new system for any other value of N, whenever needed.
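The scheme in question is simply one processor per output
element, each computing a dot product of length N. A sequential
sketch of that division of labor (illustrative code only, not
taken from any of those papers):

    def matmul_n2_processors(A, B):
        # N x N matrix product, arranged the way an N^2-processor
        # array would arrange it: the "processor" responsible for
        # entry (i, j) computes one inner product, which takes O(N)
        # steps. Here the N^2 "processors" simply run in sequence.
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
        return C

With N^2 processors working at once this is the O(N) time the
conference papers advertised; run on one processor it is the
familiar O(N^3).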
According to the most popular model of those days, the cost of
VLSI processors decreases exponentially. Hence the application
of an exponential number of processors does not cause any cost
increase, and the application of only a polynomial number of
processors results in a substantial cost saving!! The fact that
the former exponential decrease refers to calendar time and the
latter to problem size probably has no bearing on this discussion
and should be ignored.
The famous Moore model of exponential cost decrease was based on
plotting the time trend (as observed in the past) on a
semilogarithmic scale. For that reason, this model failed to
predict the present as seen today. Had the same observations
been plotted on a simple linear scale, it would be obvious that
the cost of VLSI processors is already (or about to be) negative.
This must be the case, or else there is no way to explain why so
many researchers design systems with an exponential number of
processors and compete to solve the same problem with more
processors.
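The "linear scale" argument can even be reproduced in a few
lines: fit a straight line to some cost-per-processor figures and
read off where it crosses zero. The numbers below are invented
purely for illustration.

    # Made-up cost-per-processor figures (dollars) by year,
    # decaying roughly exponentially as the Moore model says.
    years = [1972, 1974, 1976, 1978, 1980]
    costs = [100.0, 55.0, 30.0, 16.0, 9.0]

    # Least-squares straight line: cost = a * year + b.
    n = len(years)
    mean_y = sum(years) / n
    mean_c = sum(costs) / n
    a = sum((y - mean_y) * (c - mean_c) for y, c in zip(years, costs)) \
        / sum((y - mean_y) ** 2 for y in years)
    b = mean_c - a * mean_y

    # On a linear scale the fitted line marches straight through zero.
    print("negative cost predicted from about", round(-b / a))

On these fictitious figures the straight-line fit crosses zero
right around the time of writing, which is exactly the conclusion
the argument requires.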
CONCLUSIONS
- With the rapid advances of VLSI technology, anything is
possible.
- The more VLSI processors in a system, the better the paper.