[sci.math] analog models of computation

rpjday@watrose.UUCP (rpjday) (10/15/86)

  I am interested in collecting examples of problems in the area of
computation that are generally acknowledged to be reasonably to
obscenely difficult using the digital computer model, but which have
simple elegant solutions if one was allowed to construct some form
of "analog" computer.  
  As an example, the problem of finding the shortest path between two
points in a graph is easily solved if one is allowed to build a string
model of the graph, then pick it up by the source node, and measure the
distance straight down to the target node.  I'm sure everyone is familiar
with this trick, but I'm also sure there were other "analog" models
of computation, some for supposedly intractable problems.
  Can anyone out there recall others, and mail them to me, or post
them to the net?  I will eventually summarize and repost to net.math
(or sci.math, if that's what it is being renamed to).
  Thanx muchly.



______________________________________________________________________
R Day					rpjday@watrose.uucp
CS Department                         rpjday%watrose@waterloo.csnet
U. of Waterloo       rpjday%watrose%waterloo.csnet@csnet-relay.arpa

bradley@think.COM (Bradley Kuszmaul) (10/15/86)

In article <8194@watrose.UUCP> rpjday@watrose.UUCP (rpjday) writes:
>
>  I am interested in collecting examples of problems in the area of
>computation that are generally acknowledged to be reasonably to
>obscenely difficult using the digital computer model, but which have
>simple elegant solutions if one was allowed to construct some form
>of "analog" computer.  
>  As an example, the problem of finding the shortest path between two
>points in a graph is easily solved if one is allowed to build a string
>model of the graph, then pick it up by the source node, and measure the
>distance straight down to the target node.  I'm sure everyone is familiar
>with this trick, but I'm also sure there were other "analog" models
>of computation, some for supposedly intractable problems.
> ...
>R Day

Summary of the rather long article which follows:
  (1) Finding the distance between two points in a graph can be solved
      efficiently on a digital computer.
  (2) The analog computer you propose must grow as fast as the
      problem (the crane to lift the string must grow).  If I am
      allowed to build a digital computer which also grows then I can keep
      up with your analog computer by exploiting parallelism (e.g.
      using a Connection Machine (tm) computer system.)
  (3) The analog computer you propose does not even work.

Point (1):

I don't believe that your example satisfies the requirements.  Finding
the shortest path between two points using a digital computer is
nowhere near obscenely difficult.  In fact it can be done in time
which is only logarithmically worse than linear in the size of the
graph by the following algorithm:

  To find the shortest distance (and in fact find a shortest path)
  from point A to point B in a (we'll even make it directed) graph
  G=(V,E) [vertices and edges]:
   Set each vertex's value to INFINITY
   Set vertex A's value to 0.
   Let P be a priority queue; initialize P to contain A only.
   loop until P is empty
     Let V := the smallest vertex in P (removing it from P)
     if V=B we are done.
     mark V as "VISITED"
     For each edge E from vertex V to vertex U,
       set U's value to MIN(U's old value, V's value + cost-of-edge(E))
       if U is not "VISITED" and U's value just changed,
         then change U's position in the priority queue to reflect its
         new value (inserting U if it is not already in P).
     end for
   end loop
   if we get here without V ever equalling B, then "you can't get there
     from here"

Analysis: Clearly we are only going to examine any given vertex at
most once (since we examine a vertex when it is the closest unexamined
vertex, and examining a vertex which was further away is not going to
make the closer vertex any closer than it already was).  We also look
at any given edge at most once, and we recompute U's position in the
priority queue at most once per edge.  Thus the only possible way for
this algorithm to run in more than linear time is if munging around in
the priority queue takes more than constant time.  Well, the best
priority queue algorithms I know take log time for each operation,
giving us O(N log N) time behavior for this algorithm.
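The pseudocode above is essentially Dijkstra's algorithm; here is a minimal sketch in Python, using the standard-library heapq module as the priority queue (the adjacency-list representation is my choice, not part of the original post):

```python
import heapq

def shortest_distance(graph, a, b):
    """Dijkstra's algorithm, following the pseudocode above.
    graph: dict mapping vertex -> list of (neighbor, edge_cost)."""
    dist = {v: float('inf') for v in graph}   # every vertex starts at INFINITY
    dist[a] = 0                               # vertex A's value is 0
    visited = set()
    pq = [(0, a)]                             # priority queue of (value, vertex)
    while pq:
        d, v = heapq.heappop(pq)              # smallest vertex in P
        if v == b:
            return d                          # done: reached the target
        if v in visited:
            continue                          # stale queue entry; skip it
        visited.add(v)
        for u, cost in graph[v]:              # each edge from V to U
            if u not in visited and d + cost < dist[u]:
                dist[u] = d + cost
                heapq.heappush(pq, (dist[u], u))
    return float('inf')                       # "you can't get there from here"
```

Instead of changing a vertex's position in the queue, this version pushes a fresh entry and discards stale ones on the way out, which gives the same O(N log N) behavior.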

Point (2)  The analog computer's "engine" grows with the size of the
problem.  If the digital computer is allowed corresponding growth,
then parallelism can keep up:

If we compare this to the problem of building a graph out of string
and measuring it, we need to understand how fast building the graph
really is.  If we assume that the graph has been built (which seems fair,
since we assume that the input to the algorithm above is ready to go),
then it looks like the string algorithm takes time which is linear in
the diameter of the graph (or some such nonsense).  At any rate, it
looks like the string has gained a linear speedup.  However, this is
not really fair, since there are some assumptions about how to
implement this algorithm that have been ignored:  For example, it
takes a rather large hoist to implement the algorithm for billion
vertex graphs, and we can conclude that the analog computer must grow
as fast as the input grows.

  For the digital computer, the corresponding growth is in the memory
components, but if you are going to charge me for the memory in my
digital computer, I will respond by replacing every memory chip with a
microprocessor chip, so that I can exploit parallelism too.  (Then we
get into an analysis of the performance of parallel algorithms to do
the same thing, and what kind of interconnection network we are
assuming between the processors...)

Point (3)  Another problem with the analog approach to solving this problem is
that it does not work.  Consider a family of graphs with bisection
width which is linear in the number of vertices (e.g. in a binary
hypercube containing N vertices, we have to cut N/2 edges in order to
bisect the graph).  Such a graph, when built out of string, and hung
from a crane, starts getting fat around the center.  The fact that the
string has finite thickness is going to get us.  Some of the strings
will not hang straight down from point A to point B, but will have to
take a slight horizontal detour to get around the other strings which
are trying to get from point A to point B.


 - bradley@think.com or think!bradley

rpjday@watrose.UUCP (rpjday) (10/15/86)

  OKAY!  So several people have pointed out the two previous articles
on analog models in Scientific American, maybe I can save the net from
carrying the same info a couple hundred more times.  For those who
don't know which issues I refer to, it's June 84 and June 85, the
"Computer Recreations" column.  Thanks to all who responded (and to all
whose responses are still coming in).  Anybody out there with other
sources?




chandros@topaz.RUTGERS.EDU (Jonathan A. Chandross) (10/16/86)

/* Message-ID: <8195@watrose.UUCP> From: rpjday@watrose.UUCP (rpjday) */
##>  I am interested in collecting examples of problems in the area of
##>computation that are generally acknowledged to be reasonably to
##>obscenely difficult using the digital computer model, but which have
##>simple elegant solutions if one was allowed to construct some form
##>of "analog" computer.  
##>  As an example, the problem of finding the shortest path between two
##>points in a graph is easily solved if one is allowed to build a string
##>model of the graph, then pick it up by the source node, and measure the
##> distance straight down to the target node.  I'm sure everyone is familiar

The technique you are referring to is called "rubber banding."   While
workable for small problems, the method degrades quite substantially for
large(r) problems.  The problem of finding a shortest path for, say 1000
cities, will degrade to such an extent that one of the known heuristics,
running on a digital computer, will outperform the analog version, both
in accuracy and in time.

The main reason that analog computing no longer flourishes is a result
of 1) inaccuracy in calculations, and 2) difficulty in scaling models.

Hopfield and Tank have come up with some amazing results for neural
networks computing TSP (the traveling salesman problem).  But it is very
difficult to make the networks very large, and the component tolerance
is not terrific.  Fractional-percent errors add up quickly.

However, the topic is extremely interesting.  People have built the
sieve of Eratosthenes out of bicycle rims and wire, and matrix
multipliers out of (fundamentally) oatmeal boxes.
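For reference, the computation the bicycle-rim device performs is the ordinary sieve of Eratosthenes; a minimal digital sketch in Python:

```python
def sieve(n):
    """Return all primes <= n by crossing out multiples."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False         # m is a multiple of p
    return [i for i, flag in enumerate(is_prime) if flag]
```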

Perhaps the topic could be expanded to bizarre analog computation in
general??


Jonathan Chandross
allegra!topaz!chandross

zdenek@heathcliff.columbia.edu (Zdenek Radouch) (10/16/86)

In article <6490@think.COM> bradley@godot.think.com.UUCP (Bradley Kuszmaul) writes:
>In article <8194@watrose.UUCP> rpjday@watrose.UUCP (rpjday) writes:
>>
>>  I am interested in collecting examples of problems in the area of
>>computation that are generally acknowledged to be reasonably to
>>obscenely difficult using the digital computer model, but which have
>>simple elegant solutions if one was allowed to construct some form
>>of "analog" computer.  
>>  As an example, the problem of finding the shortest path between two
>>points in a graph is easily solved if one is allowed to build a string
>>model of the graph, then pick it up by the source node, and measure the
>>distance straight down to the target node.  I'm sure everyone is familiar
>>with this trick, but I'm also sure there were other "analog" models
>>of computation, some for supposedly intractable problems.
>> ...
>>R Day
>
>Summary of the rather long article which follows:
>  (1) Finding the distance between two points in a graph can be solved
>      efficiently on a digital computer.
>  (2) The analog computer you propose must grow as fast as the
>      problem (the crane to lift the string must grow).  If I am
>      allowed to build a digital computer which also grows then I can keep
>      up with your analog computer by exploiting parallelism (e.g.
>      using a Connection Machine (tm) computer system.)
>  (3) The analog computer you propose does not even work.

Sorry, my friend, you totally missed the point. The author of the original
posting brought up a very interesting idea, namely, that some problems can
be solved more efficiently on machines that are NOT digital computers.
He has chosen an example to illustrate what he's looking for, and he certainly
got his idea across. Do you not like his example? It reminds me of the recent
FTL discussion, which ended up with several net.physics readers concluding
that all the confusion was caused by "massless and infinitely rigid rods" and
similar objects that can't really exist. It's not the models that count.
It is HOW they get used!


Summary of the rather long flame which follows:

   (1)	You didn't convince me that it would be more efficient for ME
	to use the computer instead of the strings.

   (2)	The machine has to grow only because YOU made it too small in the
	first place.

   (3)	The fact that you can't add 2.3*10^456 + 1.2*10^789 on your calculator
	doesn't mean that the calculator doesn't work.

>I don't believe that your example satisfies the requirements.  Finding
>the shortest path between two points using a digital computer is
>nowhere near obscenely difficult....
>
	[The algorithm and its analysis deleted]

Well, you needed 18 lines to describe the algorithm and 10 lines to explain it.
He needed 25 words to describe his method. Then again, maybe I don't know what
"simple and elegant" really means.

>....  In fact it can be done in time
>which is only logarithmically worse than linear in the size of the
>graph by the following algorithm:

His machine solves the problem in a time unit. I don't care if the time
to run the algorithm is logarithmically worse than linear or exponentially
better than quadratic. The algorithm is SLOWER!

>.....For example, it
>takes a rather large hoist to implement the algorithm for billion
>vertex graphs.........
>
>  For the digital computer, the corresponding growth is in the memory
>components, but if you are going to charge me for the memory in my
>digital computer, I will respond by replacing every memory chip with a
>microprocessor chip, so that I can exploit parallelism too.

Gee, that's a really good idea! Replace every memory chip with a microprocessor
chip in order to exploit parallelism! 

>Then we get into an analysis of the performance of parallel algorithms to do
>the same thing, and what kind of interconnection network we are
>assuming between the processors...

One thing is for sure. The interconnection network has to be good enough to
hold data for the billion-vertex graph...

>Another problem with the analog approach to solving this problem is
>that it does not work....
>The fact that the string has finite thickness is going to get us.

Whereas the fact that the word in the Connection Machine (tm) computer system
has finite length is going to get Martians.

> - bradley@think.com or think!bradley

The second one is more appropriate.

zdenek

 -------------------------------------------------------------------------
 Men are four:
 He who knows and knows that he knows, he is wise - follow him;
 He who knows and knows not that he knows, he is asleep - wake him;
 He who knows not and knows that he knows not, he is simple - teach him;
 He who knows not and knows not that he knows not, he is a fool - shun him!

 zdenek@CS.COLUMBIA.EDU  or 	...!seismo!columbia!cs!zdenek
 Zdenek Radouch, 457 Computer Science, Columbia University,
 500 West 120th St., New York, NY 10027

tedrick@ernie.Berkeley.EDU (Tom Tedrick) (10/16/86)

>>>  As an example, the problem of finding the shortest path between two
>>>points in a graph is easily solved if one is allowed to build a string
>>>model of the graph, then pick it up by the source node, and measure the
>>>distance straight down to the target node.  I'm sure everyone is familiar
>>>with this trick, but I'm also sure there were other "analog" models
>>>of computation, some for supposedly intractable problems.

>>I don't believe that your example satisfies the requirements.  Finding
>>the shortest path between two points using a digital computer is
>>nowhere near obscenely difficult....

The problem of finding a "shortest path" in a graph is a bad example
to use because it can be solved very quickly (linear time, if I remember
correctly; Bondy & Murty give a simple quadratic-time algorithm).
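A simple quadratic-time method of the kind mentioned (this sketch is my own reconstruction, not the algorithm from Bondy & Murty) replaces the priority queue with a linear scan for the closest unvisited vertex; the adjacency-matrix representation is also my assumption:

```python
def shortest_dist_quadratic(adj, a, b):
    """O(V^2) shortest distance.  adj[i][j] is the cost of edge i->j,
    or None if there is no such edge."""
    n = len(adj)
    dist = [float('inf')] * n
    dist[a] = 0
    visited = [False] * n
    for _ in range(n):
        # Linear scan (O(V)) for the closest unvisited vertex.
        v = min((i for i in range(n) if not visited[i]),
                key=lambda i: dist[i], default=None)
        if v is None or dist[v] == float('inf'):
            break                      # nothing reachable remains
        if v == b:
            return dist[v]             # target reached; distance is final
        visited[v] = True
        for u in range(n):             # relax all edges out of v
            if adj[v][u] is not None and dist[v] + adj[v][u] < dist[u]:
                dist[u] = dist[v] + adj[v][u]
    return dist[b]
```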

What would be interesting are examples of fast analog algorithms for
NP-hard problems.

I liked the string analogy though. Let's hear some more of these
intuitive algorithms ...

desj@brahms (David desJardins) (10/16/86)

In article <3504@columbia.UUCP> zdenek@heathcliff.columbia.edu.UUCP (Zdenek Radouch) writes:
>Sorry, my friend, you totally missed the point. The author of the original
>posting brought up a very interesting idea, namely, that some problems can
>be solved more efficiently on machines that are NOT digital computers.
>He has chosen an example to illustrate what he's looking for, and he
>certainly got his idea across.

   I'm afraid not.  Not to me anyway.  To me this example is a perfect
illustration of why analog computing is so spectacularly unsuccessful.  Be honest:
if you wanted to solve the shortest-path problem for a graph of 1000 vertices
and 100000 edges, would you prefer to use the analog or the digital method?
It seems obvious that the analog method is completely hopeless -- you could
spend many days building special-purpose hardware to solve the one particular
problem, or you could solve it in a few minutes (an hour at most, including
the time to write the code) on a desktop computer, with a general algorithm
that can be used to solve *all* shortest-path problems, not just a particular
one.

>   (1)	You didn't convince me, that it would be more efficient for ME
>	to use the computer insted of the strings.

   See above.  Would you really use the string model for 1000 vertices and
100000 edges??  I simply don't believe it.  If you really think this is
superior, let's have a race.  I will spend no more for the computer hardware
than you spend on strings and beads, and achieve 1000x the accuracy in 1/1000
of the time.

>   (2)	The machine has to grow only because YOU made it too small in the
>	first place.

   Nonsense.  Both the analog method and the algorithm require linear time
in the number of edges.  The analog method requires this much time to build
the model, or at least to adjust the string lengths to the specified
parameters.

>His machine solves the problem in a time unit.

   Again, *nonsense*.  It takes at least one time unit per edge to adjust
the length of the string to the specified value.

   -- David desJardins

zdenek@heathcliff.columbia.edu (Zdenek Radouch) (10/17/86)

In article <23@cartan.Berkeley.EDU> desj@brahms (David desJardins) writes:
>In article <3504@columbia.UUCP> zdenek@heathcliff.columbia.edu.UUCP (Zdenek Radouch) writes:
>>Sorry, my friend, you totally missed the point. The author of the original
>>posting brought up a very interesting idea, namely, that some problems can
>>be solved more efficiently on machines that are NOT digital computers.
>>He has chosen an example to illustrate what he's looking for, and he
>>certainly got his idea across.
>
>   I'm afraid not.  Not to me anyway.

Ok. I was wrong.

>To me this example is a perfect
>illustration of why analog computing is so spectacularly unsuccessful.  Be honest:
>if you wanted to solve the shortest-path problem for a graph of 1000 vertices
>and 100000 edges, would you prefer to use the analog or the digital method?

No. But neither he nor I suggested that the method should be used for large
graphs. I think the method itself (the principle) is very elegant.
Remember how Gauss added all the integers from 1 to n? He simply noticed that
1 + n = 2 + (n-1) = 3 + (n-2), etc.  That is what I call elegant.
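Gauss's pairing is easy to check mechanically; a one-line Python version of the closed form it yields:

```python
def gauss_sum(n):
    # Each of the n/2 pairs (1, n), (2, n-1), ... sums to n + 1,
    # so the total is n * (n + 1) / 2.
    return n * (n + 1) // 2
```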

>It seems obvious that the analog method is completely hopeless -- you could
>spend many days building special-purpose hardware to solve the one particular
>problem, or you could solve it in a few minutes (an hour at most, including
>the time to write the code) on a desktop computer, with a general algorithm
>that can be used to solve *all* shortest-path problems, not just a particular
>one.

I can see we are talking about two different things. Maybe he hasn't chosen
the greatest example, or maybe I misunderstood his intent. This is how I
understand his example: somebody gives you a graph and wants to find the
shortest path. There are many ways to solve the problem. Let's consider these two:

1. Introduce mathematics (coordinates, distances, sorting etc....).
   Build a machine that can use it (digital computer).
   Solve the problem.

2. Make the graph out of strings and beads (duplicate the picture).
   Lift the model.

As I said, I consider (2) to be elegant. Notice that you can figure out the
shortest path without understanding the concept of distances; you don't even
know what the length of the shortest path is! Clearly, if you used a computer,
you had to calculate the distances, and that is something nobody asked for.
The question was WHICH path is the shortest, not how long the shortest path is.

Now, I am not saying that the method is general, let alone practical.
It's just DAMN simple!

>>His machine solves the problem in a time unit.
>
>   Again, *nonsense*.  It takes at least one time unit per edge to adjust
>the length of the string to the specified value.

Sorry, you are wrong. To solve the problem you just lift the model!

zdenek

P.S.
I hate it when somebody says "the model is bad".
The models are fine; you just have to use them right!

soussan@ihlpf.UUCP (Soussan) (10/17/86)

> 
>   I am interested in collecting examples of problems in the area of
> computation that are generally acknowledged to be reasonably to
> obscenely difficult using the digital computer model, but which have
> simple elegant solutions if one was allowed to construct some form
> of "analog" computer.  

.
.
.

> R Day					rpjday@watrose.uucp
> CS Department                         rpjday%watrose@waterloo.csnet
> U. of Waterloo       rpjday%watrose%waterloo.csnet@csnet-relay.arpa

There was a fairly good (I think) discussion of this topic in
_Scientific American_ in the "Computer Recreations" column recently
(I'd look it up but I just moved and all my SA's are still packed). 
I would guess that the discussion was in one of the early 1986
issues, but it may have been 85 (memory startin' to go...).

It consisted of a discussion and a call for readers' ideas/examples, which
were then revealed in the following month's column.

It sounds like fun, so have some!

Daniel Soussan

desj@brahms (David desJardins) (10/18/86)

In article <3516@columbia.UUCP> zdenek@heathcliff.columbia.edu.UUCP (Zdenek Radouch) writes:
>Now, I am not saying that the method is general, let alone practical.
>It's just DAMN simple!

   Oh, I agree it is pretty.  But it is not an argument for the value of
analog computation, since it is not practical.

>>   Again, *nonsense*.  It takes at least one time unit per edge to adjust
>>the length of the string to the specified value.
>
>Sorry, you are wrong. To solve the problem you just lift the model!

   Sorry, you are doubly wrong.

   1) From this point of view (solving a *particular* graph) a digital
computer can also do it in constant time.  Determine the answer for that
particular graph, and then write a program to print it out.
   Complexity theory only makes sense when applied to a *class* of problems,
parameterized by some parameter (in this case, the "size" of the graph,
measured in one of the several possible ways).  And, the time that it will
take you to solve a problem in this class is omega(n), unless you have an
infinitely large house full of string models of all possible graphs.  Without
such a collection, constructing the model is an inextricable part of the
solution.

   2) It takes linear time for the model to fall into place and indicate the
solution after you lift it.

   -- David desJardins

zdenek@heathcliff.columbia.edu (Zdenek Radouch) (10/19/86)

In article <27@cartan.Berkeley.EDU> desj@brahms (David desJardins) writes:
>In article <3516@columbia.UUCP> zdenek@heathcliff.columbia.edu.UUCP (Zdenek Radouch) writes:
>>
>>Sorry, you are wrong. To solve the problem you just lift the model!
>
>   Sorry, you are doubly wrong.
>
>   1) From this point of view (solving a *particular* graph) a digital
>computer can also do it in constant time.  Determine the answer for that
>particular graph, and then write a program to print it out.

You are very confused about what it takes to find the shortest path between
two vertices in a graph. We are not talking about building models or computers,
drawing graphs, calculating distances, inputting data and printing results.
We are talking about the actual methods of implementing a function:
 min(set of distances). In order to implement the function you have to use
the primitive operation COMPARE.

It turns out that relational operators in mathematics are always applied
to TWO operands, hence your additional time if n>2.

His model, together with his eyes, happens to perform a FULL PARALLEL compare,
therefore time = 1.

As I said, his method might not be practical, but it's elegant and much faster.
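The counting argument can be made concrete: with a two-operand COMPARE, reducing n distances to their minimum takes n-1 sequential steps. A small Python sketch, with the comparison counter added by me for illustration:

```python
def min_with_count(values):
    """Sequential min over a non-empty list, counting binary COMPAREs."""
    it = iter(values)
    best = next(it)
    compares = 0
    for x in it:
        compares += 1          # one two-operand COMPARE per remaining element
        if x < best:
            best = x
    return best, compares      # always exactly len(values) - 1 compares
```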

>   2) It takes linear time for the model to fall into place and indicate the
>solution after you lift it.

Only when you consider Grurmstipth's force acting on the beads. It's been known
for years, at least in physics, that the time n falling beads take to fall
is INDEPENDENT of n.

>
>   -- David desJardins

zdenek

 -------------------------------------------------------------------------

 zdenek@CS.COLUMBIA.EDU  or 	...!seismo!columbia!cs!zdenek
 Zdenek Radouch, 457 Computer Science, Columbia University,
 500 West 120th St., New York, NY 10027

postpischil@castor.dec.com (Always mount a scratch monkey.) (10/20/86)

David desJardins makes much ado about the time needed to construct an
analog model but never speaks of the time needed to construct the hardware
required for a digital model.  He speaks of spending many days building
"special-purpose hardware to solve the one particular problem", but we
only need to consider how many days it would require to build the
special-purpose digital hardware to solve the same particular problem to
see the true situation.
 
If he insists upon going into the real world and worrying about
construction time and accuracy, let's consider that the time required to
build anything grows exponentially with accuracy or size, just because
of the uncertainties of the real world and the inevitability of human
error, and this includes digital computers.
 
> If you really think this is superior, let's have a race.  I will spend no
> more for the computer hardware than you spend on strings and beads, and
> achieve 1000x the accuracy in 1/1000 of the time. 
 
You will do it in 1/1000 of the time because the machines are already
built.
 
If dropped on a planet with the necessary resources but without tools or other
created materials, which could you build first, a digital computer that would
solve the shortest-path problem for a graph of 1000 vertices and 100,000 edges
or an analog computer? 
 
Let's also note that we do not need to build special-purpose hardware for
analog models any more than we need to build special-purpose hardware for
digital models.  Even the single model of one graph can be used to find
the shortest paths between ALL pairs of nodes in the graph without being
torn down and rebuilt.  David desJardins points out it would take linear
time for the model to fall in place when lifted by a node.  However, this
linear time solves 999 shortest-path problems for a 1000-node graph.
Additionally, we can imagine a constructed machine to turn out graphs and
other analog models.
 
So why don't we turn our focus to the elegant mathematics of the models
instead of their physics and engineering?  I know I enjoyed the information
in _Scientific American_, and I'd like to hear more.
 
 
				-- edp
				Eric Postpischil
				"Always mount a scratch monkey."

bradley@think.COM (Bradley Kuszmaul) (10/20/86)

In article <3529@columbia.UUCP> zdenek@heathcliff.columbia.edu.UUCP (Zdenek Radouch) writes:
>In article <27@cartan.Berkeley.EDU> desj@brahms (David desJardins) writes:
>>In article <3516@columbia.UUCP> zdenek@heathcliff.columbia.edu.UUCP (Zdenek Radouch) writes:
>>>
>>>Sorry, you are wrong. To solve the problem you just lift the model!
>>
>>   Sorry, you are doubly wrong.
>
>You are very confused about what it takes to find the shortest path between
>two vertices in a graph. [...]

I hate it when people say "You are very confused".  Is it not enough
to be merely confused?  Besides, I don't think he is confused.

> In order to implement the function you have to use
>primitive operation COMPARE.
>
>It turns out, that relational operators in mathematics are always applied
>to TWO operands, hence your additional time if n>2.

One model of computation is that you have to use the primitive
operation COMPARE, but there are certainly other models.  For example,
if I had a model of computation which gave me MIN of an arbitrary set
as a primitive operation, I might be able to implement this algorithm
in truly constant time.  However, I don't have an implementation for
such a model, so I don't spend too much time thinking about it.

  On the other hand, I don't have an implementation for a model in
which a binary comparison takes unit time either.  Typically, as the
size of the problem grows, the size of the numbers grows also, and the
time it takes to do a comparison of two K-bit numbers is at least
(log K).
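A machine with many comparators could instead run a tournament, halving the candidate set each parallel step, so an n-way MIN costs about log2(n) rounds rather than n-1 sequential compares. Here is a Python simulation of those rounds (this sketch is mine, executed sequentially, not actual Connection Machine code):

```python
def tournament_min(values):
    """Pairwise tournament: each round compares disjoint pairs (which an
    idealized parallel machine could do simultaneously), halving the
    candidate set, until one value remains."""
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        rounds += 1
        # Compare disjoint pairs; an odd element out advances unopposed.
        vals = [min(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0], rounds     # rounds == ceil(log2(n)) for n inputs
```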

>His model together with his eyes happen to perform FULL PARALLEL compare,
>therefore time = 1.
>
>As I said, his method might not be practical, but it's elegant and
>  much faster.

If it is not practical, then how can it be faster?  I am never happy
with very fast algorithms that produce the wrong answer when I program
them myself.  I don't see why I should be more forgiving of a program
which someone else builds out of string.

>
>>   2) It takes linear time for the model to fall into place and indicate the
>>solution after you lift it.

>Only when you consider Grurmstipth's force acting on the beads. It's been
>known for years, at least in physics, that the time, n falling beads take to
>fall, is INDEPENDENT of n.

I am not sure why you are introducing Grurmstipth at this point.  Did
you intend the above paragraph to be a :-)?  If so, pardon me for
being serious...   It is going to take at least linear time (linear in
the answer rather than linear in the input) for the model to fall into
place, since some bead must fall at least the distance from the crane
(hoist, hand... etc) to where it lands and is measured.  Even if the
silly bead had already fallen to its position, you could argue that it
will take linear time just to measure it (e.g. by bouncing a laser of
the bead in question, and measuring the time it takes for the laser
beam to return).

Incidentally, several people have complained that some of my other
criticisms of this algorithm were wrong.

For example, as far as inaccuracy goes, let me state that I was
allowing the string to be arbitrarily precise, but that I did assume
that the string has some finite width.  In that case, as the graph
gets very large, the width of the hanging string (as a collection of
strings) becomes important.  It is conceivable that the shortest
string would not be allowed to fall straight down, but that it would
have to go around the lump in the center of the bundle of strings, and
that therefore some other string might "think" it was the shortest
string, giving a wrong answer.  If you assume the string has zero
width, then I can't argue much more than that at least I know how to
build a digital computer, but not a zero-width string.

________________

Some people have objected to the fact that I use more memory in a
digital computer as the size of the problem grows, and that I replaced
the memory chips with processor chips at some point in my argument.  I
think that my position here needs to be clarified.

As a string advocate, you can take one of four positions,
characterized by the cross product of two binary choices:

  (1)  The string model is charged for the amount of string it uses
       up.
  (2)  The string model is not charged for the string it uses.

and independently
  (A)  The digital computer is charged for the memory it uses, or
  (B)  The digital computer is not charged for its memory.

Now as an advocate of the string model, I don't suppose you will
support the combination of choices (B) and (1), since then the digital
computer comes out cheaper.

If you choose (2) and (B) then both machines can solve an arbitrary
problem for unit hardware.  The digital computer takes around linear
time in the inputs, and string computer takes an amount of time which
we can't seem to agree upon.  (Linear in something or other?)

If you choose (1) and (A) then both machines have a hardware cost
which is proportional to the size of the input problem.

If you choose (2) and (A) then you are being unfair, but since you are
charging me hardware costs which are linear in the size of the problem,
I might as well use the hardware of my choice rather than the hardware
of your choice, and replace the memory chips with processor chips.
This has the same cost (it is linear in the input), but now I can at
least exploit parallelism and do about as well as your parallel
machine.  In fact, I could have taken this tack if you had chosen (1)
and (A) too.

I think the string and the memory are very similar.  Either you charge
for both or you charge for neither.  I think it is more realistic to
charge for both...

________________

Some people have argued that the advantage of the string algorithm is
that it is easier to understand.  My algorithm took "eighteen lines of
code" and some more explanation.  (BTW, the explanation should not be
counted as part of the algorithm.  The explanation was analysis of the
algorithm's performance, which no one has argued about.  No analysis of
the string machine's performance can make the same claim of being
accepted by everyone.)  (Even the algorithm itself only took eighteen
lines to describe because I was being overly verbose.  The
corresponding parallel algorithm, when written in Connection Machine
Lisp (see various sundry published and unpublished papers by Guy
Steele and Danny Hillis, mumble mumble cite cite mumble mumble), is
only four lines long, and that description included parallelism.)

More fundamentally, I have never considered "the complexity of
algorithms" to include the complexity of "how hard it is to understand
the algorithm".  An algorithm only has to be written and analyzed
once, so that portion of the "complexity" is constant, independent of
the size of the input.
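For what it's worth, a sequential digital version of the shortest-path
algorithm really is short.  Here is a standard Dijkstra sketch in
Python (my own illustration with my own names; it is not the original
eighteen lines, nor the Connection Machine Lisp version, and Python
did not exist in 1986):

```python
import heapq

def shortest_path_length(adj, source, target):
    """Dijkstra's algorithm.  adj maps each node to a list of
    (neighbor, edge_length) pairs -- the digital analogue of tying
    strings of those lengths between beads."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d                      # target reached: d is minimal
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")                   # target unreachable
```

Popping the heap always extends the currently shortest taut path,
which plays roughly the role of picking the string model up by the
source node.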

________________

I also have the belief, with no proof, that there are no analog
machines which will do better than digital machines: whatever
analog machine you build, I can build a digital machine with about the
same hardware cost, and the digital machine will be able to simulate
the analog machine's behavior with only a small decrease in performance
(typically a polylogarithmic one).  If you accept that handwaving
argument, then it is very unlikely that there is an analog computer
using only a polynomial amount of hardware which can solve an NP-hard
problem in polynomial time.

>>   -- David desJardins
>zdenek

 -Bradley  (think!bradley or bradley@think.com)

desj@brahms (David desJardins) (10/21/86)

In article <6033@decwrl.DEC.COM> postpischil@castor.dec.com writes:
>David desJardins makes much ado about the time needed to construct an
>analog model but never speaks of the time needed to construct the hardware
>required for a digital model.  He speaks of spending many days building
>"special-purpose hardware to solve the one particular problem", but we
>only need to consider how many days it would require to build the
>special-purpose digital hardware to solve the same particular problem to
>see the true situation.

   There is no question of "special-purpose digital hardware" here.  The
digital computer is only a refinement of the simple notion of a tool to aid
in mechanical computation.  I could also do the calculation with pencil and
paper (or an abacus!) in far less time than it would take to build the model
and with 1000x the accuracy.
   The issue is not one of the speed of digital vs. analog hardware.  The
simple fact is that even graphical problems which have elegant analog methods
for solution are inherently much more amenable to algorithmic, arithmetical
computation.

>> If you really think this is superior, let's have a race.  I will spend no
>> more for the computer hardware than you spend on strings and beads, and
>> achieve 1000x the accuracy in 1/1000 of the time. 
> 
>You will do it in 1/1000 of the time because the machines are already
>built.
> 
>If dropped on a planet with the necessary resources but without tools or
>other created materials, which could you build first, a digital computer
>that would solve the shortest-path problem for a graph of 1000 vertices
>and 100,000 edges or an analog computer? 

   I would build an abacus.  I could most certainly achieve 1000x the
accuracy in 1/1000 of the time, even with no existing tools.  And at the
end I would have an immensely useful tool for arithmetic calculation, and
you would have a worthless net of strings and beads.

>So why don't we turn our focus to the elegant mathematics of the models
>instead of their physics and engineering?

   Because the mathematics is trivial?  (Which, BTW, is why I am going to
say no more about this in {net|sci}.math, unless hopelessly provoked.)

   -- David desJardins

zdenek@heathcliff.columbia.edu (Zdenek Radouch) (10/21/86)

In article <6528@think.COM> bradley@godot.think.com.UUCP (Bradley Kuszmaul) writes:
[me]  As I said, his method might not be practical, but it's elegant and
    much faster.

>
>If it is not practical, then how can it be faster.

There is no relationship between "practical" and "fast". One of the synonyms
of practical is USEFUL.
You have climbed a tree and want to get back to the ground level. You can:

	1. Jump.
	2. Climb down the tree.

Method (1) is always faster than (2), but it might not be always practical.

>I also have the belief, with no proof, that there are no analog
>machines which will do better than digital machines, because whatever
>analog machine you build, I can build a digital machine with about the
>same hardware cost, and the digital machine will be able to simulate
>the analog machines behavior ......

	Don't rely on your feelings or beliefs. In this case you are orders of
magnitude off. Your environment is ANALOG. All the machines that deal with it
have to be at least partially (front and back end) analog. If by "no analog
machine will do better than a digital machine" you mean only machines that
implement mathematics, you are still off. Yes, digital computers are easy to
build because of the modularity of digital electronics, and they're also
flexible (as long as the software is flexible), but there's a problem.
It is easy to say "Hey, I've got a workstation on my desk; I can simulate
EVERYTHING." You probably can. But if you say "Here's the problem, what's the
best machine to solve it?" that's another story.

	My point is that mathematics is a tool that was designed by us
in order to simplify our lives when dealing with the environment. It wasn't
here before we got here. The analog machines were!

The sine wave wasn't just a curve somebody liked. It is derived from a circle;
it describes harmonic motion. To say that the digital computer is the best
machine for playing with sine functions is a crude simplification.
The time to discharge a capacitor C through a resistor R from V1 to V2 is
t = RC ln(V1/V2). Do you think you can build digital hardware that's simpler?
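That closed form takes only a few lines to evaluate digitally; this
Python sketch (my own addition, with made-up component values) simply
wraps the formula:

```python
import math

def rc_discharge_time(R, C, V1, V2):
    """Closed-form time for a capacitor C (farads) to discharge through
    a resistor R (ohms) from voltage V1 down to V2: t = R*C*ln(V1/V2)."""
    return R * C * math.log(V1 / V2)

# 1 kOhm, 1 uF, discharging from 5 V to 1 V: about 1.61 milliseconds
t = rc_discharge_time(1e3, 1e-6, 5.0, 1.0)
```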

zdenek

 -------------------------------------------------------------------------
 Men are four:
 He who knows and knows that he knows, he is wise - follow him;
 He who knows and knows not that he knows, he is asleep - wake him;
 He who knows not and knows that he knows not, he is simple - teach him;
 He who knows not and knows not that he knows not, he is a fool - shun him!

 zdenek@CS.COLUMBIA.EDU  or 	...!seismo!columbia!cs!zdenek
 Zdenek Radouch, 457 Computer Science, Columbia University,
 500 West 120th St., New York, NY 10027

bradley@think.COM (Bradley Kuszmaul) (10/21/86)

In article <3542@columbia.UUCP> zdenek@heathcliff.columbia.edu.UUCP (Zdenek Radouch) writes:
>In article <6528@think.COM> bradley@godot.think.com.UUCP (Bradley Kuszmaul) writes:
>[me]  As I said, his method might not be practical, but it's elegant and
>    much faster.
>
>>
>>If it is not practical, then how can it be faster.
>
>There is no relationship between "practical" and "fast". One of the synonyms
>of practical is USEFUL.
>You have climbed a tree and want to get back to the ground level. You can:
>	1. Jump.
>	2. Climb down the tree.
>Method (1) is always faster than (2), but it might not be always practical.

As I said, I never feel very good about algorithms which run very
fast but give the wrong answer (in this case, broken limbs).  I believe
your analogy does nothing to refute my point, which was that if an
algorithm is not practical, how can it be fast?
There are several ways for an algorithm to be impractical:
 (1) Too expensive to implement (in which case it never runs and is
     very slow indeed),
 (2) Gives incorrect answers (in which case who cares how fast it is),
 (3) others?....

>>I also have the belief, with no proof, that there are no analog
>>machines which will do better than digital machines, because whatever
>>analog machine you build, I can build a digital machine with about the
>>same hardware cost, and the digital machine will be able to simulate
>>the analog machines behavior ......
>
>	Don't rely on your feelings or believes. In this case you are orders of
>magnitude off. Your environment is ANALOG. All the machines that deal with it
>have to be at least partially (front and back end) analog.

I am not sure what the point of demonstrating that "everything is
analog" is.  Showing that everything is analog merely proves that you
can't do better than the best analog machine.  It does not prove that
you CAN do better than a digital machine.

There may be some question as to what I mean by "can't do better".  I
admitted that I might have to pay a polylogarithmic penalty to
simulate your arbitrary analog machine with my general purpose digital
machine, and if you think things like constant factors are important
then the digital machine will sometimes lose (constant factors are, of
course, just a special case of polylogarithmic penalties).  Sure, as
an engineer, there are always special cases where an analog machine
can do better, but those special cases in the real world seem to be
becoming more scarce.

As an example, consider the clothes washing machine.  The machine
that was in my parents' house when I was a child had an analog
controller, and it worked for a long time (it lasted twenty years).
Today, most washing machines are controlled by a microprocessor.  Now,
admittedly, it would be pretty silly to go to all the trouble of
inventing transistors and building the VLSI technology and factories
to make microprocessors just to control washing machines, but once we
have the microprocessors, it becomes cheaper to use them than to build
an analog control system.  In fact, the microprocessors might do a
better job or a worse job, depending on the software.  I know that in
my audio turntable the extra smarts of a microprocessor are great: it
won't set down the read head (I guess you'd call that the cartridge?)
unless it has some evidence that a record is actually on the platter,
and the rotational speed is controlled by the processor, with specs of
something like 0.001% variation in the rotational speed (which at
least impressed my dad, who supposedly understands the technology for
building turntables with analog controllers).  Of course, these
examples are really getting off the track, since they are not really
"computing" per se, but are performing a control task.

It is true that it is hard to simulate a capacitor for as little as a
capacitor costs, but it is much cheaper to build a simulator (in
software) that can simulate arbitrary circuits, and then feed a
description of each circuit to the computer, than it is to build
special-purpose hardware to simulate every circuit.  Of course, the
real hardware may still turn out to be cheaper and faster, especially
if you are going to mass-produce it, or use it for something in the
critical path of a very high speed computation (e.g. for signal
processing at 1 GHz).
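To make that concrete, here is a toy sketch (entirely my own
illustration) that simulates the RC discharge zdenek mentioned with
forward-Euler time steps in software, trading step size for accuracy
instead of building the circuit:

```python
import math

def simulate_rc_discharge(R, C, V1, V2, dt=1e-7):
    """Forward-Euler simulation of the circuit equation dV/dt = -V/(R*C);
    steps until the voltage falls to V2 and returns the elapsed time."""
    v, t = V1, 0.0
    while v > V2:
        v -= v / (R * C) * dt   # one time step of the discharge
        t += dt
    return t

# The simulated time agrees closely with the closed form t = R*C*ln(V1/V2)
t_sim = simulate_rc_discharge(1e3, 1e-6, 5.0, 1.0)
t_exact = 1e3 * 1e-6 * math.log(5.0 / 1.0)
```

Shrinking dt buys more accuracy at the cost of more steps, which is
the polylogarithmic-penalty trade I was describing.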

>It is easy to say "Hey, I've got a workstation on my desk; I can simulate
>EVERYTHING." You probably can. But if you say "Here's the problem, what's the
>best machine to solve it?" that's another story.

I never did really understand how to use general purpose computers
anyway.  Why would anyone want them?  All I need is text editing. :-)

>	My point is that the mathematics is a tool that was designed by us
>in order to simplify our lifes when dealing with the environment. It wasn't
>here before we got here. The analog machines were!

I'm really getting random here: there are certain philosophies of
mathematics which hold that the math was here before us.  The job of
the mathematician is to discover it.

Speaking of random... What was the original posting?
 -Brad

m3h@psuecla.BITNET (10/22/86)
