[comp.sys.transputer] Busy memory

J.Wexler@edinburgh.ac.uk (10/12/89)

> Performance with programmability is the target.

Sounds good so far.

> Surely the point is to
> keep the memory as busy as possible?  Cycles-per-instruction is not the
> issue, but accesses-per-memory-location.

I don't see (a) what point you are making, or (b) how it relates to your
opening remark (above).  I have to try to deduce what you mean from your
next remark:

> Microprocessors in general
> are too big:  A 10 mip processor with 10 Meg of RAM accesses each
> location only once per second.  What a waste of RAM!
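
For the record, the arithmetic behind that "once per second" figure is fair
enough, assuming roughly one memory access per instruction.  A throwaway C
fragment, purely to spell it out (the figures are the ones you quoted):

    #include <stdio.h>

    int main(void)
    {
        double instructions_per_sec = 10e6;   /* "10 mip"                 */
        double accesses_per_instr   = 1.0;    /* assumed, and generous    */
        double ram_locations        = 10e6;   /* 10 Meg, byte-addressed   */

        printf("accesses per location per second: %g\n",
               instructions_per_sec * accesses_per_instr / ram_locations);
        /* prints 1 */
        return 0;
    }

I don't quarrel with the figure - only with its relevance.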

The whole point of memory is that it remembers things over an indefinite period.
You use it precisely because you don't want to be working on all your data all
the time - some bits of data (most of it) lie unused for long periods.  So it
doesn't look at all obvious that you should aim to keep all your memory as BUSY
as possible.  I would rather keep it as FULL as possible of useful data,
organised so that individual items can be rapidly located and fetched when they
are needed. If you have some purpose in mind for which BUSYNESS rather than
ACCESSIBILITY would be the primary virtue, then I suggest that conventional
memory is not the appropriate device; maybe you want some more active device
like associative memory, or a DAP.

But [you may say] the reason why data lies in memory for long periods is that I
am talking about sequential programs.  Why should a concurrent program leave
data lying in memory for long periods?  Once the data is available, some thread
of the program ought to get to work to process it straight away.

That line of argument will indeed lead you to favour a fine-grained style of
parallelism, but it doesn't convince me, for two reasons.

1: If it could be made to work, there would be no need to use "memory" at all;
when an item of data became available, it could be stuffed straight into
whatever mechanism was going to "process it straight away";

but

2: it would only work on the (totally unrealistic) assumption that ALL the data
required for each stage of the computation would become available
simultaneously.

So I believe that, regardless of the grain size or the paradigm of concurrency,
memory IS needed for LONG-TERM storage of data; and that suggests that BUSYNESS
is not a relevant measure of effective memory usage.
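
To put reason 2 more concretely: even the most fine-grained stage has to
remember whichever of its inputs turns up first, until the rest arrive.  A toy
C sketch of a two-input stage (the names and shape are mine, purely
illustrative - on a transputer one would more naturally write this in occam
with channels):

    #include <stdio.h>

    /* A stage that consumes two operands.  Whichever arrives first must
       be stored - i.e. remembered - until the other turns up. */
    struct adder_stage {
        int have_a, have_b;   /* which operands have arrived so far        */
        int a, b;             /* the operands themselves, sitting in memory */
    };

    static void feed(struct adder_stage *s, char which, int value)
    {
        if (which == 'a') { s->a = value; s->have_a = 1; }
        else              { s->b = value; s->have_b = 1; }

        if (s->have_a && s->have_b) {
            printf("fired: %d + %d = %d\n", s->a, s->b, s->a + s->b);
            s->have_a = s->have_b = 0;
        }
        /* otherwise the early operand just lies idle in memory */
    }

    int main(void)
    {
        struct adder_stage s = {0, 0, 0, 0};
        feed(&s, 'a', 3);     /* arrives early: stored, not yet processed */
        feed(&s, 'b', 4);     /* arrives later: now the stage can fire    */
        return 0;
    }

Multiply that by thousands of stages and non-trivial data, and the memory in
which the early arrivals wait is exactly the long-term storage I mean.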

What IS a real issue is the efficient use of the PATHWAYS to memory - the bus
or whatever; but that's quite a separate question, and fairly well understood.
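
One way to see the distinction: a program can keep the pathway to memory
completely busy while leaving almost all of memory idle.  A toy C sketch (my
own names and figures, and ignoring any caching) in which a small buffer is
scanned over and over while a large table just sits there being remembered:

    #include <stdio.h>
    #include <stdlib.h>

    #define HOT_WORDS   (16 * 1024)         /* small working set          */
    #define COLD_WORDS  (2 * 1024 * 1024)   /* large table of "idle" data */

    int main(void)
    {
        int *hot  = malloc(HOT_WORDS  * sizeof *hot);
        int *cold = malloc(COLD_WORDS * sizeof *cold);
        long long sum = 0;
        if (!hot || !cold) return 1;

        for (int i = 0; i < COLD_WORDS; i++) cold[i] = i;  /* written once */
        for (int i = 0; i < HOT_WORDS;  i++) hot[i]  = i;

        /* The pathway to memory is kept busy by the small hot buffer;
           the large cold table adds nothing to the traffic after setup. */
        for (int pass = 0; pass < 1000; pass++)
            for (int i = 0; i < HOT_WORDS; i++)
                sum += hot[i];

        printf("sum = %lld, cold[0] = %d (still remembered)\n", sum, cold[0]);
        free(hot); free(cold);
        return 0;
    }

The traffic on the bus is governed entirely by the hot buffer; the cold table
contributes nothing after it is filled, yet nobody would call it wasted.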