[comp.protocols.tcp-ip] gettimeofday

mjhammel@Kepler.dell.com (Michael J. Hammel) (12/20/90)

I want to time read()s from a socket.  The code went something like this:

	struct timeval starttime, endtime;

	gettimeofday(&starttime, 0);
	read(socket, buf, length);
	gettimeofday(&endtime, 0);

This didn't work because the granularity was not small enough (starttime
equaled endtime).  I then moved the timing calls outside a loop that
reads in a requested amount of data (the read is inside this loop), with
a poll() that waits until there is data on the socket.  This worked a
little better, and only occasionally were the two time values equal.
However, on nearly every test, after some seemingly random period, it's
almost guaranteed to give me an endtime *less* than the starttime!  I
put in some debug code which proved this.  I think it's because the
gettimeofday() call is reading the seconds and microseconds at a point
where the clock has not yet updated the seconds value after the
microseconds value has rolled over.  Is this right?  How can the endtime
be less than the starttime?  Is there any way to prevent this from
happening?  Is there a better way to time what I'm trying to time?
There's an itimer() call in V4 (I think), but that's apparently not
available in V3.2.
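
In case it helps to see the shape of it, here's roughly what the second
version looks like.  The names (timed_reads, sock, want) and the error
handling are simplified for posting, not the real code:

	#include <sys/time.h>
	#include <poll.h>
	#include <unistd.h>

	/* Read "want" bytes into buf, polling before each read so we
	 * only read when data is actually waiting on the socket.  The
	 * two timestamps bracket the whole loop. */
	int
	timed_reads(int sock, char *buf, int want,
	            struct timeval *starttime, struct timeval *endtime)
	{
	    struct pollfd pfd;
	    int got = 0, n;

	    pfd.fd = sock;
	    pfd.events = POLLIN;

	    gettimeofday(starttime, 0);
	    while (got < want) {
	        if (poll(&pfd, 1, -1) < 0)      /* wait for data */
	            break;
	        n = read(sock, buf + got, want - got);
	        if (n <= 0)                     /* error or EOF */
	            break;
	        got += n;
	    }
	    gettimeofday(endtime, 0);
	    return got;                         /* bytes actually read */
	}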
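
For what it's worth, the elapsed-time arithmetic I have in mind is the
usual timeval subtraction with a borrow when the microsecond part goes
negative, something like this:

	#include <sys/time.h>

	/* Difference (end - start) in microseconds.  The borrow covers a
	 * legitimate microsecond rollover; it doesn't help if the seconds
	 * value itself was sampled before the clock updated it. */
	long
	tv_diff_usec(struct timeval *end, struct timeval *start)
	{
	    long sec  = end->tv_sec  - start->tv_sec;
	    long usec = end->tv_usec - start->tv_usec;

	    if (usec < 0) {                     /* borrow one second */
	        usec += 1000000L;
	        sec  -= 1;
	    }
	    return sec * 1000000L + usec;
	}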

Michael J. Hammel        | mjhammel@{Kepler|socrates}.dell.com
Dell Computer Corp.      | {73377.3467|76424.3024}@compuserve.com
#include <disclaim/std>  | zzham@ttuvm1.bitnet | uunet!uudell!feynman!mjhammel
#define CUTESAYING "Your cute quote here"