[comp.windows.x] Multiple Screens and Pointer Tracking?

marvin@kelly.UUCP (Kyle Marvin) (09/16/89)

I am currently working on a multiple screen server implementation and
have some questions on the best way to handle pointer tracking.  Each
screen will be displayed on a separate monitor, and I'm wondering how
to handle the movement of the pointer between screens.  My first
assumption is that the desired behavior is to have it appear to the
user as if the two monitors are contiguous as the pointer moves
between them (i.e. as it moves off the left side of Screen 1, it
appears on the right side of Screen 0, and vice versa).  If I'm wrong
there, flame now and offer your own suggestion :-{)
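That contiguous behavior amounts to a simple coordinate hand-off at the shared edge; here is a hypothetical sketch (the function name and screen geometry are invented for illustration, not any server's actual code):

```python
# Hypothetical sketch of pointer tracking across two side-by-side
# screens of equal width: falling off an edge moves the pointer to
# the adjacent edge of the other screen (this also wraps at the
# outer edges, matching the "vice versa" behavior).

def track_pointer(screen, x, width):
    """Return (screen, x) after horizontal motion between two screens."""
    if x < 0:                        # off the left edge
        return (1 - screen, width - 1)
    if x >= width:                   # off the right edge
        return (1 - screen, 0)
    return (screen, x)

# Off the left side of Screen 1 -> right side of Screen 0:
print(track_pointer(1, -1, 1024))    # -> (0, 1023)
```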

Given that, the real rub comes in deciding how the coordinate space of
the pointer maps to the display space.  Is Screen 0 to the left,
right, top, or bottom of Screen 1?  The display capabilities of the
two screens are very different, so different users may want to place
them at different physical orientations relative to each other (any
budding young human factors guys out there care to explain why?).

My question is not so much how to implement this, but rather what is
appropriate interface?  The way I see it, the following are possible:

	1) No interface.  Screen 0 is always assumed to be at a fixed
	   orientation from Screen 1.  Blech!

	2) Provide options such as "-top", "-bottom", "-left", or "-right"
	   to specify the orientation of Screen 1 relative to Screen 0
	   at server startup.  (But what about 3 screens...).

I'd be interested to hear from anyone else who's worked with/on a
multiple screen server or just has a strong opinion on the subject.
I'd like to be consistent with what has already been done elsewhere if
possible.  Perhaps Santa Claus might bring an early present about what
to expect in R4...

Thanks In Advance,

Kyle W. Marvin
Visual Information Technologies, Inc. (VITec)
3460 Lotus Drive 	Plano, TX 75075 	(214) 596-5600
uunet!convex!vitsun!marvin

mouse@LARRY.MCRCIM.MCGILL.EDU (der Mouse) (09/16/89)

> I am currently working on a multiple screen server implementation and
> have some questions on the best way to handle pointer tracking.

I'm not sure there is a *the* best way.  But to continue....

> My first assumption is that the desired behavior is to have it appear
> to the user as if the two monitors are contiguous as the pointer
> moves between them (i.e. as it moves off the left side of Screen 1,
> it appears on the right side of Screen 0, and vice versa).

That is one reasonable approach.  But again, calling it *the* desired
behavior seems somewhat dubious to me.

> Given that, the real rub comes in deciding how the coordinate space
> of the pointer maps to the display space.

> My question is not so much how to implement this, but rather what is
> appropriate interface?  The way I see it, the following are possible:

>	1) No interface.  Screen 0 is always assumed to be at a fixed
>	   orientation from Screen 1.  Blech!

>	2) Provide options such as "-top", "-bottom", "-left", or
>	   "-right" to specify the orientation of Screen 1 relative to
>	   Screen 0 at server startup.  (But what about 3 screens...).

> I'd be interested to hear from anyone else who's worked with/on a
> multiple screen server or just has a strong opinion on the subject.

The MIT sample server on a Sun cg4 display is a multiple-screen server.
The cg4 has a two-color plane and a full-color plane.  (It also has
another one-bit plane, but this plane is simply used to select which of
the other two planes is displayed, on a per-pixel basis.)  The sample
server treats the two-color plane as a StaticGray screen, which is not
really right: it should be PseudoColor with two colors.  Ideally,
these would be used to support two different visuals on a single
screen.  However, the sample server made a different choice: two
screens on the same monitor.  (Which one is displayed?  The one the
pointer is currently in.)

They chose to arrange the two screens so that moving off either the
left or right edge of one screen switches to the other.  Topologically
speaking, they wrapped the screens around a cylinder.  (I have on
occasion had the sample server get confused into thinking *three*
screens were available; in this case, the three screens are arranged
edge-to-edge in a loop, so that if you keep moving the mouse left (or
right) you just cycle through all three screens.  I assume the same
happens for four or more screens.)

Since you have different monitors corresponding to the different
screens, you probably don't want to do this.

Ideally, you[$] would like to be able to place each screen anywhere on
an infinite plane[%], and restrict the mouse to some subregion of this
plane.  (The subregion might be the union of the screens, or simply the
bounding box of their union, or perhaps something else.)

[$] Well, actually, this is what *I* would like to see....

[%] You probably want to forbid overlap, at least initially.  I'm not
    sure how well the X model would deal with the pointer being inside
    windows on two different screens at once, and while I can think of
    other models that allow overlap, they all seem to me as though
    they'd be hairy to implement and confusing to use.

What should the user interface to this look like?  I can see several
options.  I would want the option of fully specifying everything:
something like

-placescreen 0 0 0 -placescreen 1 wd1 0 -placescreen 2 wd1+wd2 0

i.e.,

-placescreen <screen-#> <x-expression> <y-expression>

where the expressions can refer to the width and height of the various
screens if necessary.  (The pointer could then be restricted to the
smallest rectangle enclosing all screens, or perhaps it simply could
not move off a boundary at a point where there's no other screen for it
to move onto, whatever.)

Of course, you might also want to provide a simpler way of specifying
things; perhaps something like `-placescreen 1 rightof 0'?  (Just make
sure it does something reasonable if the screens are different sizes :-)
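For what it's worth, the fully-specified placement could boil down to something like this (a hypothetical sketch; the names and screen sizes are invented for illustration):

```python
# Hypothetical sketch of the "-placescreen" idea: each screen gets an
# (x, y) origin on an infinite plane, and the pointer is clamped to
# the smallest rectangle enclosing all placed screens.

screens = {                      # screen-# -> (x, y, width, height)
    0: (0,    0, 1152, 900),
    1: (1152, 0, 1280, 1024),    # i.e. "-placescreen 1 rightof 0"
}

def clamp_pointer(px, py):
    """Clamp (px, py) to the bounding box of all screens."""
    left   = min(x for x, y, w, h in screens.values())
    top    = min(y for x, y, w, h in screens.values())
    right  = max(x + w for x, y, w, h in screens.values())
    bottom = max(y + h for x, y, w, h in screens.values())
    return (min(max(px, left), right - 1),
            min(max(py, top), bottom - 1))

def screen_at(px, py):
    """Which screen, if any, contains the pointer?"""
    for n, (x, y, w, h) in screens.items():
        if x <= px < x + w and y <= py < y + h:
            return n
    return None                  # inside the bounding box but over no screen
```

Note that with bounding-box clamping the pointer can sit over no screen at all (to the lower right of screen 0 here), which is exactly the hairy case footnote [%] worries about.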

					der Mouse

			old: mcgill-vision!mouse
			new: mouse@larry.mcrcim.mcgill.edu

ron@xwind.UUCP (Ronald P. Hughes) (09/17/89)

In article <8909152100.AA07252@kelly.>, marvin@kelly.UUCP (Kyle Marvin) writes:
> 	2) Provide options such as "-top", "-bottom", "-left", or "-right"
> 	   to specify the orientation of Screen 1 relative to Screen 0
> 	   at server startup.  (But what about 3 screens...).

Sounds like you need a form widget for screens %-)

Ronald P. Hughes		ron@xwind.com (or ...!uunet!xwind!ron)
CrossWind Technologies, Inc.	(408)335-4988

jfc@athena.mit.edu (John F Carr) (09/18/89)

In article <8909152100.AA07252@kelly.> marvin@kelly.UUCP (Kyle Marvin) writes:

>I'd be interested to hear from anyone else who's worked with/on a
>multiple screen server or just has a strong opinion on the subject.

The X server on the IBM RT supports multiple screens (in fact, I have
3 in front of me right now).  They can be placed in any order, but
only horizontally.  I've never felt any desire to have vertical
stacking, since I don't stack my hardware.  There are runtime options
to identify the top and bottom edges of the screen (so that the cursor
appears at the bottom after moving off the top), or the right and left
edges of the extreme screens.  If both options are selected the
display is like a torus.
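That torus behavior is just modular arithmetic on both axes; a hypothetical sketch assuming equal-sized screens in a horizontal row (names invented, not the RT server's code):

```python
# Hypothetical sketch of the torus option: N equal-sized screens in a
# horizontal row, with both axes wrapping around.

def wrap_torus(x, y, nscreens, width, height):
    """Wrap a global pointer position; x spans all screens left-to-right."""
    x %= nscreens * width        # off the far right -> leftmost screen
    y %= height                  # off the top -> bottom, and vice versa
    return (x // width, x % width, y)   # (screen, local x, y)

# Moving up and left from the top-left corner of the leftmost screen
# lands at the bottom-right corner of the rightmost screen:
print(wrap_torus(-1, -1, 3, 1024, 864))   # -> (2, 1023, 863)
```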

There certainly should be an option to choose the order of the screens
(due to limitations of desk space and cable length, the physical order
of my screens is restricted, and I want the logical order to match).
With the RT X server, the screens are assigned in the order they are
listed on the command line.  The server lacks a method of assigning
numbers to screens independently of right-to-left order; this would be
a good addition.  

A suggestion:  If the hardware screens are labeled "a", "b", and "c",
support options like this:

	X -b 1 -c 0 -a 2

to make the screens come up in logical order "b", "c", "a" numbered
1, 0, and 2 respectively.
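Such options could be parsed along these lines (a hypothetical sketch of the suggested syntax, not existing server code; the flag handling is invented for illustration):

```python
# Hypothetical sketch of the suggested "X -b 1 -c 0 -a 2" syntax:
# each -<head> flag assigns that hardware head a logical screen number.

def parse_screen_order(args, heads=("a", "b", "c")):
    """Return a list of head labels indexed by logical screen number."""
    order = {}
    it = iter(args)
    for arg in it:
        if arg.startswith("-") and arg[1:] in heads:
            order[int(next(it))] = arg[1:]     # flag's argument is the number
    return [order[n] for n in sorted(order)]

print(parse_screen_order(["-b", "1", "-c", "0", "-a", "2"]))
# -> ['c', 'b', 'a']  (screen 0 = c, screen 1 = b, screen 2 = a)
```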

The Xibm multi-screen support is unlikely to change much for R4; I
haven't paid attention to any other servers.

    --John Carr (jfc@athena.mit.edu)

ama@Alliant.COM (Alan Amaral) (09/19/89)

In article <8909152100.AA07252@kelly.> marvin@kelly.UUCP (Kyle Marvin) writes:
>I am currently working on a multiple screen server implementation and
>have some questions on the best way to handle pointer tracking.  Each
>screen will be displayed on a separate monitor, and I'm wondering how
    .
    .
    .
>
>Kyle W. Marvin
>Visual Information Technologies, Inc. (VITec)
>3460 Lotus Drive 	Plano, TX 75075 	(214) 596-5600
>uunet!convex!vitsun!marvin


Here is a description of what we are doing with the Alliant multihead
X11 server.  It is very flexible in what it allows you to do and is
easily extensible.  It is modeled on other configuration files I have
seen, like termcap, so there is precedent.

Basically the format of the file allows the user to specify what
hardware is on the system, what screens on what servers the hardware is
associated with, and how they are laid out.  For screen layout, a compass
model is used where the user can specify that to move from one screen to
another the cursor is to be moved off of the {n, s, e, w} edge of the
first screen and it will move onto the corresponding edge of the other
screen.

The configuration file also allows the specification of different input
devices for keyboard and mouse for systems which have more than one.  It
also allows for definitions of other, non-standard devices, like knobboxes
or data tablets, etc.

Some of the definitions may not correlate strictly with what various X
documentation sources use (then again, read N sets of docs from
different sources and you get N different definitions of some things),
but they should be consistent within this document.  (Also, some things
may be specific to our hardware/software implementation and/or its
limitations, but I've tried to cull those out.)

The configuration file is read at server startup time by a simple recursive
descent parser (which I'll rewrite someday using lex).

Note, our hardware configuration is limited to 4 heads per set of hardware
so you will see things like "head[3]" in places.  This could easily be
replaced by something like "/dev/cgtwo0" or "/dev/framebuffer" or whatever
your hardware looks like.



DISPLAY environment variable:

      hostname:display.screen


Definitions:

a)    Screen: A logical specification used to direct graphical output/input
      to/from a particular set of physical hardware.  The logical to
      physical mapping is specified in the configuration file.

b)    Display: A logical specification used to direct output/input
      to/from a particular set of screens.  The logical to physical
      mapping is specified in the configuration file.  Several
      screens may map to a single display.

c)    Head: A physical framebuffer and driving hardware associated logically
      with a screen.

d)    Seat: A station operated by a single user, consisting of one or more
      heads/screens, a keyboard, and a mouse which are logically associated
      in the configuration file and via an X server.  (I thought about calling
      this a workstation but that phrase is already overused...)


Configuration language description (BNF NOTATION):

        <line>        ::= <seat-elt> | <seat-elt> <comment> |
                          <comment>
        <seat-elt>    ::= seat[NUM]:<config-list>
        <comment>     ::= #STRING NEWLINE
        <config-list> ::= <config-elt> |
                          <config-elt>:<config-list>
        <config-elt>  ::= <kb-elt> |
                          <mouse-elt> |
                          <input-elt> |
                          <screen-elt> |
                          <empty> |
                          <continuation>
        <continuation>::= "\"
        <kb-elt>      ::= keyboard=PATHNAME
        <mouse-elt>   ::= mouse=PATHNAME
        <input-elt>   ::= input[NUM]=PATHNAME
        <screen-elt>  ::= screen[NUM]=head[NUM] |
                          screen[NUM]=PATHNAME |
                          screen[NUM].<direction>=screen[NUM]
        <direction>   ::= n | s | e | w
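To make the grammar concrete, here is a minimal sketch of a reader for a single <seat-elt> line (this is not the actual Alliant parser, and the dictionary layout is invented for illustration; continuation lines are assumed to have been joined already):

```python
import re

# Sketch of a parser for one <seat-elt> line from the BNF above.
# Each <config-elt> maps onto a key in a per-seat dictionary.

def parse_seat(line):
    m = re.match(r"seat\[(\d+)\]:(.*)", line)
    if not m:
        raise ValueError("not a <seat-elt>")
    seat = {"seat": int(m.group(1)), "screens": {}, "edges": {}, "inputs": {}}
    for elt in m.group(2).split(":"):
        elt = elt.strip()
        if not elt:                               # <empty>
            continue
        key, _, value = elt.partition("=")
        if key in ("keyboard", "mouse"):          # <kb-elt> / <mouse-elt>
            seat[key] = value
        elif (sm := re.match(r"screen\[(\d+)\]\.([nsew])$", key)):
            # screen[NUM].<direction>=screen[NUM]
            dst = int(re.match(r"screen\[(\d+)\]", value).group(1))
            seat["edges"][(int(sm.group(1)), sm.group(2))] = dst
        elif (sm := re.match(r"screen\[(\d+)\]$", key)):
            seat["screens"][int(sm.group(1))] = value   # head or device path
        elif (sm := re.match(r"input\[(\d+)\]$", key)):
            seat["inputs"][int(sm.group(1))] = value    # <input-elt>
    return seat

cfg = parse_seat("seat[0]:keyboard=/dev/kb:mouse=/dev/mouse:screen[0]=head[0]")
```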



    Directions are bidirectional by default, i.e.
	screen[0].e=screen[1]
     implies
	screen[1].w=screen[0]

    except where ambiguous, i.e.
	screen[0].n=screen[2]
     and
	screen[1].n=screen[2]

    Potentially ambiguous cases must be disambiguated explicitly,
i.e.
	screen[0].n=screen[2]
	screen[1].n=screen[2]
	screen[2].s=screen[1]

This could be taken care of by using a combinational notation (i.e. ".ne"
and ".nw") and having each half or third of one screen correspond to each
of the other screens.  This would be easy to do.

    If you can leave a screen, you MUST be able to get back somehow
(otherwise life can get real interesting).
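Those edge rules can be checked mechanically; a hypothetical sketch (invented names, not the actual implementation):

```python
# Sketch of the edge rules above: an explicit edge implies its reverse
# unless the reverse is already defined, in which case the explicit
# entry wins (this is how the ambiguity rule is resolved).

OPPOSITE = {"n": "s", "s": "n", "e": "w", "w": "e"}

def resolve_edges(explicit):
    """explicit: dict of (screen, direction) -> destination screen.
    Returns the full edge table with implied reverse edges filled in."""
    edges = dict(explicit)
    for (src, d), dst in explicit.items():
        back = (dst, OPPOSITE[d])
        if back not in edges:
            edges[back] = src        # imply the reverse edge
    return edges

# A lone east edge implies the matching west edge:
print(resolve_edges({(0, "e"): 1}))  # -> {(0, 'e'): 1, (1, 'w'): 0}
```

In the ambiguous case (screens 0 and 1 both pointing north at screen 2), the explicit `screen[2].s=screen[1]` entry is already present, so no reverse edge gets invented for it.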


Here is a sample of what the file would look like:


##############################################################################
# X11 configuration file
##############################################################################
#
# Seat 0 configuration for a sun with default keyboard and mouse devices:
#
#  associates /dev/kb with the keyboard
#  associates /dev/mouse with the mouse
#  head[0] is the only screen

seat[0]:keyboard=/dev/kb:	\
	mouse=/dev/mouse:	\
	screen[0]=head[0]

# Seat #1 configuration
#
#  associates /dev/keyboard0 with the keyboard
#  associates /dev/mouse0 with the mouse
#  screen #0 = head #4 is center (screen #1 is to the east, #2 is to the west)
#  screen #1 = head #5 is right  (screen #2 is to the east, for circularity)
#  screen #2 = head #6 is left
#  screens motion is circular
#
#  Screen layout looks like this:
#
#      	     +--------+   +--------+   +--------+
# (from 1) ->| scrn 2 |<->| scrn 0 |<->| scrn 1 |<- (back to 2)
#            +--------+   +--------+   +--------+
#  The cursor may not be moved off the bottom or top of any screen.
#

seat[1]:keyboard=/dev/keyboard0:	\
	mouse=/dev/mouse0:		\
					\
	screen[0]=head[4]:		\
	screen[0].e=screen[1]:		\
	screen[0].w=screen[2]:		\
					\
	screen[1]=head[5]:		\
	screen[1].e=screen[2]:		\
					\
	screen[2]=head[6]

# Seat #2 configuration
#
#  associates /dev/keyboard1 with the keyboard
#  associates /dev/mouse1 with the mouse
#  screen #0 = head #1 is left  (screen #1 is to the east, #2 is to the north)
#  screen #1 = head #2 is right (screen #0 is to the east, for circularity)
#  screen #2 = head #3 is top   (screen #0 is to the south)
#  screens motion is circular
#
#  Screen layout looks like this:
#
#                   +--------+
#                   | scrn 2 |
#                   +--------+
#                    ^      ^
#                   /        \
#                  V          \
#      	     +--------+   +--------+
# (from 1) ->| scrn 0 |<->| scrn 1 |<- (back to 0) 
#            +--------+   +--------+
#  The cursor may not be moved to the south of screen #0 or #1, or to the
#  north of screen #2.  Motion to the south of #2 will always go to screen #0.
#

seat[2]:keyboard=/dev/keyboard1:	\
	mouse=/dev/mouse1:		\
					\
	screen[0]=head[1]:		\
	screen[0].e=screen[1]:		\
	screen[0].n=screen[2]:		\
					\
	screen[1]=head[2]:		\
	screen[1].e=screen[0]:		\
	screen[1].n=screen[2]:		\
					\
	screen[2]=head[3]:		\
	screen[2].s=screen[0]


Obviously many more configurations are possible; these are only a few.