[comp.arch] 32->64 bit Killer Micros

wailes@en.ecn.purdue.edu (Tom S Wailes) (06/23/91)

Assume you are designing a massively parallel computer system made of
many commodity microcomputers.  To make this interesting, assume that
partitioning can be made to allow differing classes of users to coexist on
one machine.  For example, several "vi" tasks could be resident on a single
processor, while many (128) processors could be attacking an electron
transport simulation problem.  What word size would be the best to use,
and why?
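
To make the scenario concrete, one could imagine the allocator keeping
nothing fancier than a table mapping contiguous processor ranges to job
classes.  The numbers and names below are made up, just a sketch of the
bookkeeping in C:

    /* Hypothetical sketch: a partition table mapping contiguous
     * processor ranges to classes of work.  Names and figures are
     * illustrative only, not from any real machine. */
    #include <stdio.h>

    enum job_class { INTERACTIVE, BATCH_SIM };

    struct partition {
        int first_proc;          /* first processor in the range */
        int num_procs;           /* how many processors it spans */
        enum job_class class;    /* kind of work that runs there */
        const char *label;
    };

    int main(void)
    {
        /* e.g., 4 processors time-share the "vi" crowd while 128
         * are handed over to the electron transport simulation */
        struct partition map[] = {
            { 0,   4, INTERACTIVE, "vi, shells, mail"       },
            { 4, 128, BATCH_SIM,   "electron transport sim" },
        };
        int i;

        for (i = 0; i < 2; i++)
            printf("procs %3d..%3d: %s\n", map[i].first_proc,
                   map[i].first_proc + map[i].num_procs - 1,
                   map[i].label);
        return 0;
    }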

     A massively parallel machine would probably be used for scientific
computations, and many would say you must have a machine with 64-bit
capabilities built in.  However, all of the general-purpose programming and
low-level processing tasks could easily be handled by present 32-bit
micros, and clearly the 32-bit market will have the most competition and
the most competitive prices for some years to come.  You must make a design
decision soon.  What papers have studied this?  Who has numbers on
which to base this decision?
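
It may help to split "64-bit capability" in two: 64-bit floating-point
data, which typical 32-bit micros already handle through their FPUs, and
64-bit integers and addresses, which they do not.  A trivial C probe shows
the asymmetry, with sizes as a typical 32-bit micro would report them:

    /* On a typical 32-bit micro, doubles are already 64 bits wide,
     * while longs and pointers are 32; double-precision arithmetic
     * is fine, and addressing beyond 4 GB is what's missing. */
    #include <stdio.h>

    int main(void)
    {
        printf("double : %u bytes\n", (unsigned) sizeof(double)); /* 8 */
        printf("long   : %u bytes\n", (unsigned) sizeof(long));   /* 4 */
        printf("pointer: %u bytes\n", (unsigned) sizeof(char *)); /* 4 */
        return 0;
    }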

                                                  Tom
                                                  wailes@ecn.purdue

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (06/24/91)

In article <1991Jun23.012644.12449@en.ecn.purdue.edu> wailes@en.ecn.purdue.edu (Tom S Wailes) writes:

|                                What word size would be the best to use,
| and why?

  Since the answer all depends on the definition of "best," I guess we're
going to waste several weeks talking at cross purposes. Why not restate
the question, giving some hint of what you're trying to maximize?

  You can go for max MIPS, MIPS/$ (with or without software cost
included), lowest power drain, or best paint job. You can specify
writing your own o/s, using unix, using something else, etc.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  GE Corp R&D Center, Information Systems Operation, tech support group
  Moderator comp.binaries.ibm.pc and 386-users digest.
         "I admit I am prejudiced, I can't stand bigots." -me

wailes@en.ecn.purdue.edu (Tom S Wailes) (06/26/91)

In article <3457@crdos1.crd.ge.COM>, davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:
> In article <1991Jun23.012644.12449@en.ecn.purdue.edu> wailes@en.ecn.purdue.edu (Tom S Wailes) writes:
> 
> |                                What word size would be the best to use,
> | and why?
> 
>   Since the answer all depends on the definition of "best," I guess we're
> going to waste several weeks talking at cross purposes. Why not restate
> the question, giving some hint of what you're trying to maximize?
> 
>   You can go for max MIPS, MIPS/$ (with or without software cost
> included), lowest power drain, or best paint job. You can specify
> writing your own o/s, using unix, using something else, etc.
> -- 
> bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)

   The best architecture would be one that maximized SPECmarks, or some
other measure of performance related to actual code.  One problem I have is
defining this.  What code runs on a massively parallel machine at an actual
company or research center?  Do actual users run dusty decks of Fortran
originally written for vector machines, or do they experiment by developing
entirely new approaches, taking into account that the guts are hundreds of
individual 32-bit micros?  Do the present operating systems limit our
experience such that present use would be much, much different from future
use?  The end goal of my research is the development of the hardware
architecture of future "Killer Micros."  I confess that this goal is
elusive, because the micro hosts are constantly changing and improving, but
I am trying to attack the shortest job first.  How big is big enough?  It
seems 32 bits is big enough, but what do I know; I'm not a user.
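
A back-of-the-envelope check of "big enough" (the 1024^3 grid below is an
invented workload, purely for the arithmetic): a 32-bit address reaches
4 GB, while a cubic grid of doubles at 1024 points per side needs 8 GB, so
no single 32-bit node can map the whole problem; carved across 128 nodes,
though, it is a comfortable 64 MB apiece.  The word-size question then
becomes whether any one node ever has to name the entire data set.

    /* Back-of-the-envelope: does the problem fit a 32-bit node?
     * The 1024^3 grid is invented for illustration. */
    #include <stdio.h>

    int main(void)
    {
        double limit = 4294967296.0;             /* 2^32 bytes = 4 GB   */
        double grid  = 1024.0 * 1024.0 * 1024.0; /* points in the grid  */
        double bytes = grid * 8.0;               /* doubles, 8 B apiece */

        printf("whole problem : %5.0f MB (32-bit limit %5.0f MB)\n",
               bytes / 1048576.0, limit / 1048576.0);
        printf("per node (128): %5.0f MB\n", bytes / 128.0 / 1048576.0);
        return 0;
    }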

                                             Tom
                                             wailes@ecn.purdue.edu

P.S.  On a related topic, what does one do with virtual memory on a Killer
Micro?  Do you distribute memory among the processors, or do you create a
large banked shared memory?  Caching and virtual memory support for the
uniprocessor environment will most likely be intrinsic on a 32-bit micro.
How does one use these built-in features in a massively parallel,
microprocessor-based architecture?  A shared memory would offer better
utilization, in my opinion, but then it would not be local.  Who has
experimented with this?
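
One sketch of the distributed alternative (the field widths below are
arbitrary, chosen only to match the 128-processor example): carve a global
address into a node number and a local offset, let each node's ordinary
32-bit MMU translate the local part, and let the interconnect route any
reference whose node field is not your own.

    /* Illustrative only: split a 32-bit global address into
     * (node, offset).  7 node bits cover 128 nodes; 25 offset
     * bits give 32 MB of local memory per node. */
    #include <stdio.h>

    #define OFFSET_BITS 25
    #define OFFSET_MASK ((1UL << OFFSET_BITS) - 1)

    int main(void)
    {
        /* a reference to byte 0x1234 of node 5's local memory */
        unsigned long gaddr = (5UL << OFFSET_BITS) | 0x1234UL;

        printf("node   = %lu\n", gaddr >> OFFSET_BITS);
        printf("offset = 0x%lx\n", gaddr & OFFSET_MASK);
        return 0;
    }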