[comp.parallel] dataflow and data structures

roy%bonnie.ics.uci.edu@ORION.CF.UCI.EDU (John Roy) (06/14/88)

 In looking into dataflow architectures and their data structures,
I've run into an interesting question that some of you may (hopefully)
have some ideas on.

       i)  For the purposes of parallelism it is desirable to
distribute the independent accesses to a given data structure, but
(given a single centralized data structure) this only increases the
access time to the data structure, since every access must go over
the network.  How do we balance these conflicting desires?
        ii) Any attempt to keep complete copies of each data structure
in each PE is going to create massive overhead and a database
consistency problem.
        iii) If we attempt to distribute the data structure (keeping
the local parts in local memory), the database consistency problem
again occurs and extensive overhead will be created.
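To make the tradeoff in (iii) concrete, here is a minimal sketch (in
Python) of a data structure hash-partitioned across PEs.  The cost
constants and class names are hypothetical, chosen only to show why
distribution pays off solely when accesses can be kept mostly local:

```python
# Sketch of option (iii): a structure hash-partitioned across PEs.
# LOCAL_COST and REMOTE_COST are assumed, illustrative access costs,
# not figures from any real dataflow machine.

LOCAL_COST = 1    # assumed cost of a local-memory access
REMOTE_COST = 10  # assumed cost of an access over the network

class PartitionedStructure:
    def __init__(self, num_pes):
        self.num_pes = num_pes
        # each PE owns the elements that hash to it
        self.partitions = [dict() for _ in range(num_pes)]

    def owner(self, index):
        # simple hash-style distribution of elements to PEs
        return index % self.num_pes

    def read(self, pe, index):
        """Return (value, cost) for PE `pe` reading element `index`."""
        home = self.owner(index)
        cost = LOCAL_COST if home == pe else REMOTE_COST
        return self.partitions[home].get(index), cost

    def write(self, pe, index, value):
        """Store `value`; return the (assumed) cost of the access."""
        home = self.owner(index)
        self.partitions[home][index] = value
        return LOCAL_COST if home == pe else REMOTE_COST
```

With 4 PEs and uniformly random indices, three quarters of all
accesses land on a remote partition, so this layout wins only if the
program's accesses can be arranged to favor each PE's own partition.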

These problems seem to be independent of dataflow, and would occur in
*any* distributed system having shared data structures.  Any ideas on
how to handle this??? 

It also looks like some type of code analyzer to determine proper
distribution of the data structure is necessary.
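One simple form such an analyzer could take (a sketch only; the trace
format and the greedy placement rule are my assumptions, not a real
analysis pass) is to examine which PE touches each element most often
and assign ownership by that affinity:

```python
# Sketch of an affinity-based "code analyzer": given a trace of
# (pe, index) access pairs, assign each element to the PE that
# accesses it most often, maximizing local accesses per element.

from collections import Counter, defaultdict

def choose_distribution(access_trace):
    """access_trace: iterable of (pe, index) pairs.
    Returns {index: owning_pe}."""
    counts = defaultdict(Counter)
    for pe, index in access_trace:
        counts[index][pe] += 1
    # pick the most frequent accessor of each element as its owner
    return {index: c.most_common(1)[0][0] for index, c in counts.items()}
```

A static version would derive the trace from the program text rather
than from a run, but the placement decision would look much the same.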

John M.A. Roy. UC Irvine/Fail-Safe Technology
ICS Dept., Univ. Calif., Irvine CA 92714
Internet: roy@ics.uci.edu  CompuServe: 76167,2527