[comp.graphics] Distributed GL graphics via high speed networks...

tttron@escher.lerc.nasa.gov (William Krauss) (05/14/91)

I recently discovered a new "feature" of the SGI Distributed Graphics
Library (DGL) daemon running under Irix 3.3.2 (resides on the server Iris). 
It seems as though it doesn't recognize the userid from the CLIENT machine 
when the userids are DIFFERENT (such as CALVIN on a client Cray and HOBBES
on the server Iris).  

The userid is typically specified with the environment variable REMOTEUSER on 
the client side (e.g. setenv REMOTEUSER CALVIN).  The only way I remedied this 
problem was to use the older version of the "dgld" (3.3.1).  By the way, 
.rhosts, etc. are all set up correctly (all works fine with the OLD daemon). 

Any comments from the SGI think-tankers?

In other late-breaking news, I am using DGL from various platforms (Crays,
Convex). I am also using the DGL with an UltraNet high-speed network. 

Questions:      1) How many others out there are using the DGL??
 
                2) Anyone else using UltraNet with or WITHOUT the DGL 
                   (including their frame buffer, etc)?  If "yes" then
                   how are you using it?
                        
        
Thanks in advance (e-mail okay).        


-William



-- 
>>>> William D. Krauss			NASA Lewis Research Center  <<<<
>>>> Graphics Visualization Lab		Cleveland, OH 44135    USA  <<<< 
>>>> tttron@escher.lerc.nasa.gov(128.156.1.94)      (216) 433-8720  <<<<

tarolli@westcoast.esd.sgi.com (Gary Tarolli) (05/14/91)

In article <1991May13.191050.21842@eagle.lerc.nasa.gov>, tttron@escher.lerc.nasa.gov (William Krauss) writes:
> I recently discovered a new "feature" of the SGI Distributed Graphics
> Library (DGL) daemon running under Irix 3.3.2 (resides on the server Iris). 
> It seems as though it doesn't recognize the userid from the CLIENT machine 
> when the userid's are DIFFERENT (such as CALVIN on a client Cray and HOBBES
> on the server Iris).  
> 
> The userid is typically specified with the environment variable REMOTEUSER on 
> the client side (e.g. setenv REMOTEUSER CALVIN).  The only way I remedied this 
> problem was to use the older version of the "dgld" (3.3.1).  By the way, 
> .rhosts, etc. are all set up correctly (all works fine with the OLD daemon). 
> 
> Any comments from the SGI think-tankers?
> 

The dgld daemon calls ruserok(3N) to validate the login request.  The userid
on the server side should not matter, as it's out of the picture (unless you
run the dgld server manually).  Ruserok will allow the login if the
client userid can log in without a password (see the man page for the gory
details).

Now, as for your problem, even though your .rhosts etc. files did not change,
other things may have.  For example, are you now running domains?  If so,
then perhaps the 3.3.1 server was linked with a different version of ruserok()
that treated domains differently.  If you are running with domains, try
placing the full domain name in your .rhosts file, e.g. foo.esd.sgi.com.
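The kind of check ruserok() performs can be sketched in a few lines.  This is
only an illustration of the matching rule, not SGI's implementation, but it
shows why a bare hostname in .rhosts stops working once the client side starts
presenting fully qualified domain names:

```python
# Simplified sketch of a ruserok(3N)-style .rhosts check.  NOT the real
# implementation -- just the matching rule that matters here: hostnames
# must match exactly, so "foo" does not match "foo.esd.sgi.com".

def rhosts_allows(rhosts_lines, client_host, client_user, local_user):
    """Return True if a .rhosts-style file permits a password-less login."""
    for line in rhosts_lines:
        line = line.split("#", 1)[0].strip()   # ignore comments/blanks
        if not line:
            continue
        fields = line.split()
        host = fields[0]
        # A line with no user field permits only the same userid.
        user = fields[1] if len(fields) > 1 else local_user
        if host.lower() == client_host.lower() and user == client_user:
            return True
    return False

rhosts = ["foo  CALVIN"]                # short name only
print(rhosts_allows(rhosts, "foo", "CALVIN", "HOBBES"))              # True
print(rhosts_allows(rhosts, "foo.esd.sgi.com", "CALVIN", "HOBBES"))  # False
```

With "foo.esd.sgi.com CALVIN" in the file instead, the second lookup succeeds,
which is exactly the suggested fix above.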

To double check things, try logging into the client as CALVIN (su doesn't
always do the trick) and then "rsh server-machine date".  If this works,
then .rhosts is set up correctly.  The only remaining user-error
problem could be that the DGL is not using the exact userid or hostname
that you think it is.  To verify this, do

setenv DGLDEBUG 1

on the client side, rerun the program, and read the info messages - they
display the full userid and hostnames being used.  If all this checks out,
then it may be a bug in the dgl server.  However, my guess is that the
older 3.3.1 dgl server was more lenient in its networked permissions,
and that a simple "magical" change to some file like .rhosts may correct
the problem.



--------------------
	Gary Tarolli

vjs@rhyolite.wpd.sgi.com (Vernon Schryver) (05/15/91)

The easiest way to debug a .rhosts problem on an IRIS is to use the command
`rsh host -l guest env` and then to examine the values of REMOTEHOST and
REMOTEUSER to see that they match the target .rhosts file.  (Of course,
if there is no open account such as guest, you have to `rlogin host`, type
a password, and then use `env`, `printenv`, `echo $REMOTEHOST`, or whatever.)

As Gary wrote, many things can cause the remote machine to use different
values for either of those variables.  Their values are obtained
from the rsh protocol and getpeername(2) and gethostbyaddr(3).
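The server-side half of that can be sketched with loopback sockets standing in
for a real rsh connection (an illustration of the two calls named above, not
rshd/dgld code):

```python
# Sketch of where a REMOTEHOST-style value comes from on the server:
# the peer address of the accepted connection (getpeername), reverse-
# resolved to a name (gethostbyaddr).
import socket

srv = socket.socket()
srv.bind(("127.0.0.1", 0))           # loopback, ephemeral port
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

addr, _port = conn.getpeername()     # the getpeername(2) step
try:
    host = socket.gethostbyaddr(addr)[0]   # the gethostbyaddr(3) step
except socket.herror:
    host = addr                      # no reverse mapping available
print(addr, host)

conn.close()
cli.close()
srv.close()
```

Whatever name the reverse lookup returns - short, fully qualified, or an alias -
is what ends up matched against .rhosts, which is why resolver changes can break
a previously working setup.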


Vernon Schryver,   vjs@sgi.com

banks@homer.cs.unc.edu (David Banks) (05/15/91)

How does the distributed graphics demon work? Are there 
multiple servers for a single database that send their
transformed polygons to a single display? How much faster
is it on the machines you are using?

tarolli@westcoast.esd.sgi.com (Gary Tarolli) (05/17/91)

In article <3869@borg.cs.unc.edu>, banks@homer.cs.unc.edu (David Banks) writes:
> How does the distributed graphics demon work? Are there 
> multiple servers for a single database that send their
> transformed polygons to a single display? How much faster
> is it on the machines you are using?

The dgld daemon works as follows:
	*) There is one daemon per dglopen connection.  This usually means
		1 per GL pgm, since most GL pgms don't open multiple
		connections.  Note that we are talking connections, not
		windows.  A GL pgm can open up to 256 windows per
		connection.

		A nice side effect of this implementation is that, unlike
		the X server model, one hung or busy server does not hang
		all the windows on the screen.  And the DGL server does not
		have to do a "select" call after each GL primitive.  The X
		server does, in order to offer fair time-sharing, and this
		is why most X servers cannot process more than a few
		thousand protocol requests per second.

	*) Each server acts mostly as a "wire".  For each GL command it
		receives, it simply calls the GL.  If there is any data to
		be sent back to the client, the server collects this data
		and then sends it back.

So, as an example, if you are running 2 GL pgms remotely, there will be
2 dgld daemons running - one for each GL pgm.  How much time each server
is allotted, and when, is up to the OS.
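The one-daemon-per-connection "wire" model above can be sketched with a forking
server.  The command names here are invented for illustration - the real dgld
speaks the GL protocol - but the shape is the same: each connection gets its own
process, commands are executed, and any results are sent back:

```python
# Sketch of the dgld model: one server process per connection, each
# acting mostly as a "wire".  ForkingTCPServer gives the same
# one-process-per-connection shape; the commands are made up.
import socket
import socketserver
import threading

class WireHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # This handler runs in its own forked process, so a slow or
        # hung connection cannot stall the others.
        for line in self.rfile:
            cmd = line.decode().strip()
            if cmd == "quit":
                break
            # Stand-in for "calls the GL": send a result back.
            self.wfile.write(f"ok {cmd}\n".encode())

server = socketserver.ForkingTCPServer(("127.0.0.1", 0), WireHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def run_client(cmd):
    with socket.create_connection(server.server_address) as s:
        s.sendall(f"{cmd}\nquit\n".encode())
        return s.makefile().readline().strip()

# Two "GL pgms" -> two independent connections, two server processes.
r1 = run_client("rotate")
r2 = run_client("polygon")
print(r1, r2)

server.shutdown()
server.server_close()
```

How the OS schedules the two server processes is then an ordinary scheduling
question, as the post says - neither connection waits on the other.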

As for how much faster it is, I cannot answer because I do not know what
you wish to compare it to...

--------------------
	Gary Tarolli

jsw@xhead.esd.sgi.com (Jeff Weinstein) (05/18/91)

In article <104738@sgi.sgi.com>, tarolli@westcoast.esd.sgi.com (Gary Tarolli) writes:
> 		A nice side effect of this implementation is that, unlike
> 		the X server model, one hung or busy server does not hang
> 		all the windows on the screen.  And the DGL server does not
> 		have to do a "select" call after each GL primitive.  The X
> 		server does, in order to offer fair time-sharing, and this
> 		is why most X servers cannot process more than a few
> 		thousand protocol request per second.

  Gary's analysis of the X server model is incorrect.  The X server
doesn't call select on every protocol request.  The X server I am
sitting in front of (4D/35 running IRIX 4.0) can process about
140,000 protocol requests per second.  This is two orders of
magnitude greater than "a few thousand".

	--Jeff

-- 
Jeff Weinstein - X Protocol Police
Silicon Graphics, Inc., Entry Systems Division, Window Systems
jsw@xhead.esd.sgi.com
Any opinions expressed above are mine, not sgi's.