[net.arch] General parallelism

rb@cci632.UUCP (07/11/86)

In article <2219@umcp-cs.UUCP> mark@maryland.UUCP (Mark Weiser) writes:
>Sigh.  Is there something new here?  ANYONE can build a "truly
>parallel machine" if they have a truly parallel application like
>image processing.  I admit such machines are very very useful if
>you know "exactly where <you> want each bit to go", and if I
>were doing image processing I would buy one, but this hardly
>tells me anything about parallel processing in general.
>-mark

Since you simply want some "general information", here are a few very simple
illustrations of how parallelism can be achieved.

      a1   a2
      |\  /|
      | \/ |
      | /\ |
      |/  \|
      b1   b2

Assume that each processor is capable of multi-tasking, and that the
a's request services from the b's.  Assume also that the b's will accept
messages only when resources are available to service requests.
Finally, assume that the a's are functionally identical to one another,
as are the b's.
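
Here is a minimal sketch of that arrangement, written in Go purely for
illustration (the Request type, channel names, and job strings are invented
here, not part of the original scheme).  Each a hands a request to whichever
b is free, and each b accepts a request only when it is idle:

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	// Request is a hypothetical work item an "a" node hands to a "b" node.
	type Request struct {
		From string
		Job  string
	}

	// serverB models a "b" node: it accepts a request only when it is free,
	// simply because it does not receive from its channel while busy.
	func serverB(name string, in <-chan Request, done *sync.WaitGroup) {
		defer done.Done()
		for req := range in {
			fmt.Printf("%s servicing %q for %s\n", name, req.Job, req.From)
			time.Sleep(50 * time.Millisecond) // stand-in for the real work
		}
	}

	// clientA models an "a" node: it is not specific about which "b" it uses;
	// the select hands each request to whichever b is ready first.
	func clientA(name string, b1, b2 chan<- Request, jobs []string, done *sync.WaitGroup) {
		defer done.Done()
		for _, job := range jobs {
			req := Request{From: name, Job: job}
			select {
			case b1 <- req:
			case b2 <- req:
			}
		}
	}

	func main() {
		// Unbuffered channels: a send completes only when a b is ready,
		// matching "b's accept messages only when resources are available".
		b1 := make(chan Request)
		b2 := make(chan Request)

		var servers, clients sync.WaitGroup
		servers.Add(2)
		go serverB("b1", b1, &servers)
		go serverB("b2", b2, &servers)

		clients.Add(2)
		go clientA("a1", b1, b2, []string{"search db", "open file"}, &clients)
		go clientA("a2", b1, b2, []string{"search db", "search db"}, &clients)

		clients.Wait()
		close(b1)
		close(b2)
		servers.Wait()
	}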

Now, a1 can issue requests such as search inquiries against a data base,
or opening a file, or whatever.  In addition, the a's can issue compound
requests such as
"(grep expr1 b!file1 > a!file1 & grep expr2 b!file2 > a!file2 & wait);
join a!file1 a!file2"
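
The same idea can be sketched as a runnable program; here it is in Go rather
than shell, with made-up file names and patterns standing in for the b's
files.  The two searches run concurrently, and the join step waits for both
result sets before combining them:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// search scans one file for lines containing pattern and sends the
	// matches back on out; it stands in for one backgrounded "grep".
	func search(path, pattern string, out chan<- []string) {
		var matches []string
		f, err := os.Open(path)
		if err == nil {
			defer f.Close()
			sc := bufio.NewScanner(f)
			for sc.Scan() {
				if strings.Contains(sc.Text(), pattern) {
					matches = append(matches, sc.Text())
				}
			}
		}
		out <- matches
	}

	func main() {
		c1 := make(chan []string)
		c2 := make(chan []string)

		// The two searches run in parallel, like the two backgrounded greps.
		go search("file1", "expr1", c1)
		go search("file2", "expr2", c2)

		// The "join" step: wait for both result sets, then combine them.
		joined := append(<-c1, <-c2...)
		for _, line := range joined {
			fmt.Println(line)
		}
	}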

Now, since b1 and b2 have access to the same data, b1 can search at the
same time b2 is searching.  a1 would normally be waiting, but because it
is multi-tasking, it can be preparing another parallel call.

Normally b1 and b2 would have to wait on the disk drives, but because a2
can be issuing another call at the same time, and the b's can also
multi-task, throughput actually improves as the load increases.

Now, as the "a" files are filled, since both a's are capable of accessing
the same files (logically or physically), either one could do the join,
and either could do the display processing.

Now, suppose b1 were "too busy" to take on a job?  It could reject the
request, or simply not poll the port, and b2 would pick up the load.
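
Here is a sketch of the "reject" option, again in Go with an invented
Request/Reply structure (the "don't poll the port" option is just the select
shown earlier: an unpolled b never completes the send, so the other b does).
A busy b answers with an explicit refusal, and the a simply retries on the
other b:

	package main

	import "fmt"

	// Request carries a job and a channel for the server's answer
	// (the names are made up for this sketch).
	type Request struct {
		Job   string
		Reply chan string
	}

	// overloadedB accepts requests but explicitly rejects them while busy,
	// so the requester knows to go elsewhere.
	func overloadedB(in <-chan Request) {
		for req := range in {
			req.Reply <- "busy: try the other server"
		}
	}

	// idleB actually does the work.
	func idleB(in <-chan Request) {
		for req := range in {
			req.Reply <- "done: " + req.Job
		}
	}

	func main() {
		b1 := make(chan Request)
		b2 := make(chan Request)
		go overloadedB(b1)
		go idleB(b2)

		req := Request{Job: "search file1", Reply: make(chan string)}

		// First attempt goes to b1, which rejects it...
		b1 <- req
		fmt.Println("b1:", <-req.Reply)

		// ...so the a retries on b2, which picks up the load.
		b2 <- req
		fmt.Println("b2:", <-req.Reply)
	}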

How the paths are arranged physically is irrelevant.  The basic point is
that logically this relationship exists.

The individual nodes could be systems, drives, terminals, or processors.
The system could be expanded horizontally and/or vertically.  The a's
could be "laddered systems" such as dual-port Ethernet gateways.

The main principles are quite simple.  So long as the a's are logically
identical to one another, and likewise the b's, and the a's are not specific
about which b's they use, parallel processing will naturally occur whenever
possible.  So long as all nodes are multi-tasking, even multiple sequential
processes will trigger multi-processing.
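
Another way to see the principle (again a Go sketch, with names of my own
choosing rather than anything from the scheme above): if the b's are
interchangeable workers draining one shared queue, and the a's never name a
particular b, the work spreads across whatever capacity exists with no
explicit scheduling at all.

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		requests := make(chan string)
		var wg sync.WaitGroup

		// Two logically identical b's drain the same queue; the a's never
		// say which b should run a request, so both stay busy whenever
		// more than one request is outstanding.
		for _, name := range []string{"b1", "b2"} {
			wg.Add(1)
			go func(name string) {
				defer wg.Done()
				for job := range requests {
					fmt.Println(name, "handled", job)
				}
			}(name)
		}

		// The a's just issue requests; parallelism happens by itself.
		for _, job := range []string{"search 1", "search 2", "join", "display"} {
			requests <- job
		}
		close(requests)
		wg.Wait()
	}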

Sounds simple, right?  It is!

What is difficult is keeping multiple "levels" "logically identical".