hemant@ms.uky.edu (Hemant Rotithor) (03/09/89)
I am doing an experiment in distributed computing with the following setup:

o A network of VAX stations (Ultrix with NFS).
o The application program is initially started on one of the machines.
o This program then starts the same application program on the other
  (remote) machines in the background, using the "rsh" command.
o The machines communicate using sockets.

QUESTIONS

1> How can I find the CPU time required by the different routines of the
   application program running on the remote machines (those without a
   control terminal)? Can I use gprof here?
2> How do I choose between datagram and stream sockets for this setup?
3> Datagrams are said to be unreliable. How unreliable are they, and when?
   How is the unreliability seen by the programmer? Are datagrams
   delivered with errors, or simply not delivered at all?
4> What is a good way to debug a program running on a remote machine in
   the above setup?

Please excuse me if my questions are naive. I will appreciate any
responses. Thanks in advance.
-- 
Hemant G. Rotithor                      (606)258-2656
Elect. Engg., 206 Old Mining Building   * E-mail: hemant@ms.uky.edu or
University of Kentucky, KY 40506-0046   *         rotithor@engr.uky.edu
I could if I should, whether I would?