[comp.parallel] Breakthrough at Sandia?

rgr@m10ux.UUCP (Duke Robillard) (03/25/88)

I just read an article about an achievement at Sandia regarding getting
close to the theoretical maximum speedup that a multiprocessor has over
a single processor (ya know, really running 1K times faster on a
1K-processor machine).  Unfortunately, it was in EE Times, so the
technical details were a little light (:-)
The results are to be published in a "journal of the Society for
Industrial and Applied Mathematics," according to the article, and the
guys responsible got a prize at IEEE's COMPCON last month.

Does anyone know anything more about this?
-- 
+                                
|       Duke Robillard
|       AT&T Bell Labs           m10ux!rgr@ihnp4.UUCP                 

eugene@pioneer.arpa (Eugene N. Miya) (03/25/88)

Results were presented at IEEE COMPCON last month.  I had just joined the
audience when Alan Karp polled it (I saw the plaque Alan made).
The code was run on a 1024-processor NCUBE.  The paper is in the conference
proceedings:

%A John L. Gustafson
%A Gary R. Montry
%T Programming and Performance on a Cube-Connected Architecture
%J Compcon '88
%I IEEE
%C San Francisco, CA.
%D February-March, 1988
%P 97-100
%K NCUBE hypercubes, Ensemble Paradigm, Language, Debugging,
Communications, Load Balance

I have problems with the way Alan polled the audience to achieve consensus
(he should have seriously asked, "How many don't care or don't know whether
they regard this as a 'real application'?").  The problem was scaled up from
the one-processor case.

Grrrrr...

From the Rock of Ages Home for Retired Hackers:

--eugene miya, NASA Ames Research Center, eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {uunet,hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene

lisper-bjorn@cs.yale.edu (Bjorn Lisper) (03/25/88)

[ See Eugene Miya's preceding article. Steve.]
[ The article will be published in whichever journal Bill Gear edits -- the
  release in the local paper included an interview with him.]

In article <1197@hubcap.UUCP> rgr@m10ux.UUCP (Duke Robillard) writes:
:I just read an article about an achievement at Sandia regarding getting
:close to the theoretical maximum speedup that a multiprocessor has over
:a single processor (ya know, really running 1K times faster on a
:1K-processor machine).  Unfortunately, it was in EE Times, so the
:technical details were a little light (:-)
:The results are to be published in a "journal of the Society for
:Industrial and Applied Mathematics," according to the article, and the
:guys responsible got a prize at IEEE's COMPCON last month.
:
:Does anyone know anything more about this?

Hmm, I wonder how they can achieve this for inherently sequential problems.
I suspect there is a catch somewhere.

Do you know in which of the SIAM journals this is to be published? There are
several. And could you give an exact reference to the article you saw? I
would like to check this out.

Bjorn Lisper

grunwald@uunet.uu.net (03/28/88)

Basically, they took problems well suited to parallelism (a wave equation,
a finite-difference beam-stress problem, and one other) and produced good
implementations.

For a fixed problem size, they got speedups of roughly 550 to 650.  If you
fix the problem size *per processor* (i.e., the problem gets bigger as you
throw more processors at it), the ``scaled speedup'' peaks at 1020.

This is for total execution time -- loading, running, and unloading.

The scaled-speedup measure is justified because many problem sizes are
simply constrained by the available time, not by the problem itself.  In
those cases, you just want the thing done within a reasonable time, and
if you can run a larger version of the problem in that time, you're
happy about it.
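
For concreteness, here is a minimal sketch of the two measures in their
usual textbook forms (Amdahl's fixed-size speedup and the scaled-speedup
variant).  The function names, the processor count, and the 0.1% serial
fraction are illustrative assumptions on my part, not numbers or code
from the Gustafson/Montry paper:

/*
 * Sketch of the two speedup measures discussed above, in their usual
 * textbook forms.  NOTE: the names, processor count, and 0.1% serial
 * fraction are illustrative assumptions, not data from the paper.
 */
#include <stdio.h>

/* Fixed problem size (Amdahl): the serial part is a constant share of
 * a fixed total amount of work. */
double fixed_size_speedup(double s, double n)
{
    return 1.0 / (s + (1.0 - s) / n);
}

/* Scaled problem size: the parallel part grows with the machine, so the
 * speedup over one processor doing the same (larger) job approaches
 * n - s*(n - 1). */
double scaled_speedup(double s, double n)
{
    return n - s * (n - 1.0);
}

int main(void)
{
    double n = 1024.0;      /* processors */
    double s = 0.001;       /* serial fraction, 0.1%, purely illustrative */

    printf("fixed-size speedup: %.0f\n", fixed_size_speedup(s, n));
    printf("scaled speedup:     %.0f\n", scaled_speedup(s, n));
    return 0;
}

With those inputs it prints a fixed-size speedup of about 506 and a
scaled speedup of about 1023, which is why the two measures diverge so
sharply at 1024 processors.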

The 550-to-650 speedup indicates that the serial fraction of the code was
about 0.15%, which is far smaller than Amdahl conjectured to be achievable.
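
As a rough back-of-the-envelope check (assuming the plain fixed-size
Amdahl form; the exact figure depends on how the serial fraction is
defined and measured), you can invert Amdahl's law to recover the serial
fraction from the posted round numbers:

/*
 * Invert the fixed-size Amdahl form S = 1/(s + (1-s)/n) to recover the
 * serial fraction s from an observed speedup S on n processors.  The
 * 550 and 650 plugged in are the round numbers from this posting.
 */
#include <stdio.h>

double serial_fraction(double speedup, double n)
{
    return (1.0 / speedup - 1.0 / n) / (1.0 - 1.0 / n);
}

int main(void)
{
    double n = 1024.0;

    printf("S = 550 on %.0f processors -> s = %.3f%%\n",
           n, 100.0 * serial_fraction(550.0, n));
    printf("S = 650 on %.0f processors -> s = %.3f%%\n",
           n, 100.0 * serial_fraction(650.0, n));
    return 0;
}

That works out to roughly 0.05% to 0.08%, so the figure above is an
order-of-magnitude estimate.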

Interestingly, Gene Amdahl gave a talk here right after this came out --
from his comments, I don't think he had read the paper.  He conjectured
that the optimistic speedup to be expected from hypercube systems for a
general job mix would be about 12% of the processor count (i.e., about
126 of 1024) for a fixed problem size.

Obviously, you don't use hypercubes for a general job mix. Or at least not
the entire hypercube.
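
For a sense of what that 12% figure implies, here is the same fixed-size
Amdahl arithmetic run the other way (my own illustration, not Amdahl's
or the paper's):

/*
 * Under the fixed-size Amdahl form, roughly what serial fraction would
 * limit 1024 processors to about 126x?  Purely illustrative arithmetic;
 * Amdahl's remark was about a general job mix, not any particular code.
 */
#include <stdio.h>

int main(void)
{
    double n = 1024.0;
    double target = 126.0;  /* speedup Amdahl considered optimistic */
    double s = (1.0 / target - 1.0 / n) / (1.0 - 1.0 / n);

    printf("126x on 1024 processors corresponds to s = %.2f%%\n", 100.0 * s);
    return 0;
}

In other words, the conjecture amounts to assuming a serial fraction
near 0.7% for a typical job, several times larger than what the Sandia
codes apparently achieved.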

Dirk Grunwald
Univ. of Illinois
grunwald@m.cs.uiuc.edu
