[comp.sys.transputer] Dynamic reconfiguration

steph@sonia.ese-metz.FR (Stephane Vialle) (03/14/91)

In <27405@uflorida.cis.ufl.EDU>  Madhan writes :

> Can someone tell me the exact definition of a dynamically
> reconfigurable machine.  I have 2 definitions:
> 
>  --  A machine in which, at any instant of time, we can use a
>      configuration of processors that is a subset of the whole set of
>      available processors in the system.
>    (Or is this static reconfiguration as we do not really change the
>    underlying configuration)?
>
>  --  A machine in which the set of processors are reconfigured some
>      time during the execution of some program. That is, in this case, 
>      the underlying configuration of processors is changed in the 
>      middle of some program execution. (Is this possible at all) ?
>
> Which of the above definitions is correct?
>
> I would like to know if the Parsytec's transputer board with 16
> transputers supports dynamic reconfiguration as defined in my second
> definition above.



     I think the correct definition of a dynamically reconfigurable machine
is the second one: a machine in which the connections between the processors
(the links) are redistributed during the execution of a program. This means
that the topology of the underlying network of processors changes during
execution. For example, you can go from a ring to a hypercube of processors
within the same program, using the same processors.

     The T-NODE machines have this kind of property. They are composed of a
set of transputers and a switching network which changes during the program
in order to meet the dynamic communication needs of the transputers. But it
seems very difficult to exploit this property when programming T-NODE machines.

     The Parsytec transputer board on which we work has a switching network
controlled by a small transputer, a T212. A jumper on the board (JP4) selects
whether this T212 is driven by the host computer, by an external computer, or
by one of the T800 transputers on the board. We have never tried dynamic
reconfiguration, but perhaps it is possible if we control the T212 with a
T800 of the network.

     Anyway, it is very difficult to program parallel machines with dynamic
topologies.

Steph.


-----------------------------------------------------------------------------
Stephane  Vialle                                E-mail: steph@ese-metz.fr
Supelec Metz (ESE)                              Phone:  +33 87 74 99 38
Computer Science Department                     Fax:    +33 87 76 95 49
2 rue Ed. Belin                                
F-57078 Metz Cedex 3
FRANCE
-----------------------------------------------------------------------------

andyr@inmos.com (Andy Rabagliati) (04/19/91)

An idea I had for dynamic reconfiguration but never tried goes as
follows :-

Example Configuration :-

7 Transputers (compute TPs) with all 4 links connected to a C004.

A T2 connected to both a data link on the C004 and the control link.

One link (the listener link) on each compute TP is serviced by a process
that knows the current communication needs of the compute TP.

The T2 polls each listener link to determine which C004 connections to
make, connecting its data link to each listener link in turn. The scheme
extends readily to multiple C004s and larger networks.

Transactions might go as follows :-

T2:- (asking TP 1) Any connections to be changed ?
TP1:- No thanks.
T2:- (asking TP 2) Any connections to be changed ?
TP2:- Please connect me to TP 7
T2:- (Makes connection) Please use link 2.
T2:- (asking TP 3) Any connections to be changed ?
TP3:- No thanks.

  . . .

T2:- (asking TP 2) Any connections to be changed ?
TP2:- Please disconnect TP7, connect to TP6
T2:- TP7 disconnected, TP6 busy, please hold.
  . . .
T2:- (asking TP 2) Any connections to be changed ?
TP2:- Please connect TP6
T2:- (makes connection) Please use link 3.
  . . .

etc.
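The polling dialogue above can be simulated in a few lines of Python. This is only a sketch of the protocol logic, not transputer code; the `Crossbar` and `poll` names are illustrative, standing in for the C004 and the T2's polling loop.

```python
# Toy simulation of the T2 polling protocol described above.
# All names (Crossbar, poll, ...) are illustrative, not a real API.

class Crossbar:
    """Toy C004: tracks which compute-TP links are cross-connected."""
    def __init__(self):
        self.peer = {}              # tp id -> tp id it is connected to

    def busy(self, tp):
        return tp in self.peer

    def connect(self, a, b):
        self.peer[a] = b
        self.peer[b] = a

    def disconnect(self, a):
        b = self.peer.pop(a)
        del self.peer[b]

def poll(crossbar, requests):
    """One T2 polling round: ask each TP for a request, service it if
    possible.  `requests` maps tp -> ('connect', peer),
    ('disconnect', peer), or None ("no thanks")."""
    replies = {}
    for tp, req in requests.items():
        if req is None:
            replies[tp] = 'no change'
        elif req[0] == 'connect':
            peer = req[1]
            if crossbar.busy(peer):
                replies[tp] = f'TP{peer} busy, please hold'
            else:
                crossbar.connect(tp, peer)
                replies[tp] = f'connected to TP{peer}'
        elif req[0] == 'disconnect':
            crossbar.disconnect(tp)
            replies[tp] = f'TP{req[1]} disconnected'
    return replies

bar = Crossbar()
# Round 1: TP2 asks for TP7; TP1 and TP3 have nothing to change.
print(poll(bar, {1: None, 2: ('connect', 7), 3: None}))
# Round 2: TP4 also wants TP7 and must hold, since TP7 is busy.
print(poll(bar, {4: ('connect', 7)}))
```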

Cheers,  Andy Rabagliati    EMAIL:- rabagliatia@isnet.inmos.COM

adm@computer-science.manchester.ac.uk (Alan Murta) (04/22/91)

Here at the University of Manchester, England, we have been using dynamically
reconfigured transputer links for some time now. We have a home-built 64
transputer machine, known as the "T-Rack". Two links from each transputer
connect up to a large 128 x 128 C004 crossbar network (the extra large size is
to allow external connections). The C004s are controlled by a separate switch
control transputer.

An 8-bit bus allows bidirectional communication between the 64 application
transputers and the switch control transputer. Application transputers can
ask (via the bus) for link connections to be set up or relinquished at run
time. The switch controller installs the required links, and sends back an
acknowledgement to the requesting transputer(s). The T-Rack was not designed
with dynamic reconfiguration in mind, so "link throughput" performance is not
as good as it could be.

Early work [1, 2, 3] featured a sender-node request protocol, in which the
message sending transputer would be responsible for the request / release of
switched link connections. This protocol has the disadvantage that receiver
nodes must be organised so as to have a message receiver process active at all
times, to accept any incoming messages arriving from remote senders.

In occam, the ownership of a communication channel is shared by the processes
it connects. Synchronous channel communication requires that both the message
sender and receiver must be ready to communicate before any data is sent down
the channel.
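The rendezvous semantics can be illustrated with ordinary threads. The sketch below is Python, not occam: the `Channel` class is an illustrative stand-in, built so that `send` cannot complete until `recv` has taken the value, and vice versa.

```python
# A synchronous (rendezvous) channel for Python threads, illustrating the
# occam semantics described above: neither side proceeds until both have
# met at the channel.  The Channel class is illustrative, not occam.
import threading

class Channel:
    def __init__(self):
        self._slot = None
        self._barrier = threading.Barrier(2)   # sender + receiver

    def send(self, value):
        self._slot = value
        self._barrier.wait()     # rendezvous: receiver is now present
        self._barrier.wait()     # receiver has copied the value out

    def recv(self):
        self._barrier.wait()     # rendezvous: sender is now present
        value = self._slot
        self._barrier.wait()     # release the sender
        return value

ch = Channel()
out = []
rx = threading.Thread(target=lambda: out.append(ch.recv()))
rx.start()
ch.send("hello")                 # blocks until rx has taken the value
rx.join()
```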

A dynamic link reconfiguration protocol should reflect this - both the
sending node and the receiving node should agree that they require a link over
which to communicate. More recent work here at Manchester [4] has featured a
new sender-receiver-node link request protocol, in which both transputers must
register their interest in using a dynamic link.

The use of this second request protocol has allowed the development of
elegantly coded distributed applications, free from the clutter of message
forwarding / channel multiplexing processes, and with fully synchronous
point-to-point communications between any pair of processes anywhere in the
network. Parallel efficiency is low when communication dominates
computation; increasing the compute-to-communicate ratio alleviates the
link request overheads, however.
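The core of the sender-receiver request protocol can be sketched as follows. This is Python modelling the matching logic only; `SwitchController` and its methods are illustrative names, not the actual T-Rack switch-control software: a link is installed only once both endpoints have registered interest, mirroring the synchronous rendezvous of an occam channel.

```python
# Toy model of a sender-receiver link request protocol: the switch
# controller installs a crossbar connection only once BOTH endpoints
# have registered interest in talking to each other.  Illustrative only.

class SwitchController:
    def __init__(self):
        self.pending = set()     # pairs with only one registrant so far
        self.links = set()       # installed connections

    def register(self, me, peer):
        """TP `me` asks for a link to `peer`.  Returns True when the
        link is installed (i.e. the peer had already registered too)."""
        pair = frozenset((me, peer))
        if pair in self.pending:
            self.pending.discard(pair)
            self.links.add(pair)
            return True          # both sides ready: acknowledge
        self.pending.add(pair)
        return False             # wait for the other endpoint

    def release(self, me, peer):
        self.links.discard(frozenset((me, peer)))

sw = SwitchController()
sw.register(3, 9)    # sender registers first: no link yet
sw.register(9, 3)    # receiver registers: link 3<->9 is installed
```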

References:
-----------
  [1]	P. Jones, A. Murta, "Support for Occam Channels via Dynamic Switching
	in Multi-Transputer Machines", OUG 9 Proceedings, 1988, IOS Amsterdam.
  [2]	P. Jones, A. Murta, "Practical Experience of Run-Time Link
	Reconfiguration in a Multi-Transputer Machine", Concurrency: Practice
	and Experience, 1, 2, December 1989, John Wiley.
  [3]	P. Jones, A. Murta, "The Implementation of a Run-Time Link Switching
	Environment for Multi-Transputer Machines", NATUG 2 Proceedings, 1989.
  [4]	A. Murta, "Support for Transputer Based Program Development via
	Run-Time Link Reconfiguration", Ph.D. Thesis, University of
	Manchester, (under preparation - due late 1991).

---------------------------------------------------------------------------
Alan Murta   Department of Computer Science, University of Manchester,
Lecturer     Oxford Road, Manchester, M13 9PL, U.K.  Tel: (+44) 61-275-6259
             Mail: adm@uk.ac.man.cs     adm%cs.man.ac.uk@nsfnet-relay.ac.uk
---------------------------------------------------------------------------

piet@cs.ruu.nl (Piet van Oostrum) (04/24/91)

We have a 16 transputer system and one C004 link switch. As the C004 has
only 32 in/outputs, the other 32 links must be put into a fixed
configuration. Any suggestions about what would be a good one? We want to be
able to make the usual networks: grid, ring, torus, tree, 4-d cube, etc.;
in any case far fewer than 32! different ones.

We don't find it very useful to have more than one connection between two
transputers, or to connect a transputer to itself. The numbering of the
links is not important either, and all connections are bidirectional.
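The link budget here can be checked in a few lines. The sketch below assumes one plausible choice for the fixed half, hard-wiring links 0 and 1 of each transputer into a ring, and just verifies the arithmetic: this uses exactly the 32 link ends that cannot reach the switch and leaves links 2 and 3 of every transputer for the C004's 32 ports. The ring is only an illustrative choice, not a recommendation from the thread.

```python
# Link budget for 16 transputers (4 links each) and one 32-way C004.
# Assumption for illustration: links 0 and 1 of each transputer are
# hard-wired into a ring; links 2 and 3 go to the switch.

N = 16
fixed = []                       # hard-wired connections
for t in range(N):
    # link 1 of t goes to link 0 of the next transputer round the ring
    fixed.append(((t, 1), ((t + 1) % N, 0)))

fixed_ends = {end for conn in fixed for end in conn}
switched_ends = {(t, l) for t in range(N) for l in (2, 3)}

print(len(fixed_ends))       # 32 hard-wired link ends
print(len(switched_ends))    # 32 link ends on the C004's 32 ports
```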
-- 
Piet* van Oostrum, Dept of Computer Science, Utrecht University,
Padualaan 14, P.O. Box 80.089, 3508 TB Utrecht, The Netherlands.
Telephone: +31 30 531806   Uucp:   uunet!mcsun!ruuinf!piet
Telefax:   +31 30 513791   Internet:  piet@cs.ruu.nl   (*`Pete')