[comp.protocols.tcp-ip] Reliable connectionless

muts@fysaj.fys.ruu.nl (Peter Mutsaers /100000) (09/07/90)

For a project on distributed computational physics I am
working on a small library to allow users easy access to 
N processes on different computers.

Because of the large number of processes probably involved, I
want to use connectionless (UDP) sockets, but of course
I then have to implement a protocol on top to provide reliable
data transmission.

My question is whether someone has more information on this
(methods to implement it, or maybe even Unix sources),
as I don't want to spend time on something that has already
been done.

Maybe someone could also tell me whether I am on the right track.
I have the following idea to implement it:
- each process has a well-known (address+port) UDP socket,
  which is made asynchronous.
- a write operation sends packets to process i on some host,
  causing a SIGIO signal there. In the signal handler the packet
  is read and put in a buffer (N-1 buffers per host, one for each
  other process).
- a read operation reads from the buffer, and blocks if not enough
  data is in it (by blocking until a signal arrives, and checking
  after each SIGIO whether there is enough data now).


Thanks in advance,
--
Peter Mutsaers                          email:    muts@fysaj.fys.ruu.nl     
Rijksuniversiteit Utrecht                         nmutsaer@ruunsa.fys.ruu.nl
Princetonplein 5                          tel:    (+31)-(0)30-533880
3584 CG Utrecht, Netherlands                                  

J.Crowcroft@CS.UCL.AC.UK (Jon Crowcroft) (09/10/90)

 >For a project on distributed computational physics I am
 >working on a small library to allow users easy access to 
 >N processes on different computers.

 >Maybe someone could also indicate to me if I am on the right way:
 >I have the following idea to implement it:
 >- each process has a generally known (address+port) udp socket,
 >  which is made asynchronous.
 >- a write operation sends packets to process i on some host,
 >  causing an interrupt. In the interrupt the packet is read
 >  and put in a buffer (N-1 buffers per host, one for each other process).
 >- a read operation reads from the buffer, and blocks if not enough
 >  data is in it. (by doing wait(), and check after a signal (SIGIO)
 >  if there is enough data now)

Peter,

 I did this a couple of years ago while prototyping a multicast
transport protocol. Your main problem is reliability - you have to
re-implement in user space all the TCP-isms (acknowledgements,
retransmission, sequencing, flow control) that normally live in the
kernel. Since that's what I was interested in, it didn't matter to
me - it might irk you.

If you have a taste for it, get the VMTP and IP multicast code from
Stanford and install them; then you have it all done for you (you do
RPCs or transactions with process groups).

Alternatively, you can hack TCP to open a connection to a
well-known port + broadcast address (yuck),

or just install multicast IP and change this to a
well-known port + multicast address.
As the SYN-ACKs come back, bind multiple connections - you then have
the socket-to-user-space decision: do you return n lots of data from
n connections on each read, have multiple accepts + fds, or ioctls to
de-mux each connection, etc.? Someone somewhere in California, close
to the heart of a certain Unix, did some of this, but is hibernating
or something, I believe :-).

Having said that, what you are suggesting will sorta work some of the
time...

It's about time someone had a good remote multi-procedure call package.

 jon