[comp.os.minix] SCREEN proposal

meulenbr@cst.prl.philips.nl (Frans Meulenbroeks) (06/11/91)

This message proposes a standard for interfacing with a bitmapped screen
under MINIX. 

The proposal is written since graphics and graphical user interfaces are
becoming more and more used. I think a system like MINIX should also
provide at least an option to support graphics.

Note that this is only a proposal written to raise some discussion.
I do not have the code. Heck, the proposal still contains some open
ends. The major purpose of this posting is to get suggestions on these
open ends, other improvements, and the spotting of omissions.
Please do not make suggestions which do not adhere to the requirements
listed below.

The proposal is written with the following requirements in mind.

Screen handling should be
1) compatible on all platforms on which Minix runs (PC, Atari, Amiga, Mac,
   Sparc) and with all the video hardware available on them,
2) easy to use,
3) efficient,
4) small with respect to kernel growth,
5) not intertwined with the rest of the kernel, so it can easily be made
   optional.

Assumptions.
1) (positive) device coordinates will be used. The upper left 
   corner is (0, 0)
2) The code must be able to deal with color.
   It is up to the implementor whether or not color is actually
   supported.
3) The way images are stored in offscreen memory is implementation
   specific. Depending on the hardware used, different storage
   techniques are preferred. For instance, on VGA it may be best
   to store the different bit planes after each other. On the ST, however,
   it is more convenient to store all bits of a pixel sequentially
   (so complete pixels are stored). A small sketch of the two layouts
   follows below.
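
To illustrate the difference between the two techniques, here is a minimal
sketch of reading back a single pixel. This is only an example; the function
names, and the assumption of byte-aligned rows with 8 bits per pixel in the
packed case, are mine and not part of the proposal.

/* Planar layout: one complete bitmap per plane, planes stored back to back.
 * Packed layout: all bits of a pixel stored together.
 * Assumes 'width' is a multiple of 8.
 */
unsigned planar_get_pixel(unsigned char *mem, int width, int height,
			  int depth, int x, int y)
{
	int bytes_per_plane = (width / 8) * height;
	int offset = y * (width / 8) + x / 8;
	unsigned pixel = 0;
	int p;

	for (p = 0; p < depth; p++)
		if (mem[p * bytes_per_plane + offset] & (0x80 >> (x % 8)))
			pixel |= 1 << p;
	return pixel;
}

unsigned packed_get_pixel(unsigned char *mem, int width, int x, int y)
{
	return mem[y * width + x];	/* one byte per pixel */
}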
To be specified:
1) how to specify source and/or destination operands for subroutines when
   they are not on the physical screen but offscreen, in memory.
   Using an additional screen parameter seems best here.
   Opinions wanted!
2) the interface with the mouse. Levels 1 and 2 only (see below).

Generally speaking, the whole graphics stuff can be split into at
least five different levels.
5) application programs (let's call them clients)
   (e.g. painting programs, window managers)
4) high level library functions (e.g. to display a dialog, create a
   window etc.) which can be used to write clients
3) graphic server
2) low level library functions (e.g. bitblt, draw line)
1) driver.

Level 5 is up to the application programmer. For a window
manager, something like MGR may be used. At least MGR runs on the
PC in 64k + 64k mode. X is out of the question here. It is way too
large and too slow. The window manager I am thinking of compares to X 
as MINIX compares to VMS.

Level 4 would be the high level library functions which perform
tasks like creating a window. Most of the functions in this level
interface with the graphic server process in level 3 to get the work
done. This layer isolates the programmer from knowledge of how to
communicate with the graphic server (level 3).
This level could also contain functions to draw higher level
objects (like pull-down and pop-up menus, buttons, scroll-bars).

Level 3 could be a graphic server process: a user process which
handles simple graphic functions (creating windows, drawing lines,
text, curves, etc.) and manages the shape of the mouse pointer, the
stacking order of the windows, clipping, and so on.
Only one graphic server process should exist in the system.

The interface between the application programs and the server,
as well as between the server and the driver, should be as fast and
efficient as possible (requirement 3).
In the proposed graphical system, the server does
not need the driver. Everything is done through level 2 functions, which
themselves can do almost everything directly by accessing the
frame buffer. The communication between the applications and the server
can be made efficient by limiting the number of calls from client to server
and by providing a fast IPC mechanism (named pipes could do, but I'm not
sure they will be fast enough). The number of calls can be limited by providing
an extensive set of poly-xxx functions, like poly-point, poly-line,
poly-char, etc. Optionally, one could design a mechanism to let a client
deliver a number of graphic commands to the server in a single call.
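
To make the batching idea a little more concrete, here is a minimal
client-side sketch. The record layout, the constants and the names
(gfx_cmd, gfx_send, GFX_POLY_LINE, MAXPOINTS) are illustrative assumptions
only, not part of the proposal.

#define GFX_POLY_LINE	1
#define MAXPOINTS	64

/* One buffered graphic command; a real protocol would need more fields
 * (color, clipping window, ...).
 */
struct gfx_cmd {
	short opcode;			/* e.g. GFX_POLY_LINE */
	short npoints;
	short xy[2 * MAXPOINTS];	/* x1, y1, x2, y2, ... */
};

int gfx_send(struct gfx_cmd *cmd);	/* hypothetical IPC routine */

/* Client stub: build one command and hand it to the server in a single
 * IPC transaction (gfx_send() would hide the actual mechanism, e.g. a
 * named pipe).
 */
int poly_line(int npoints, short *xy)
{
	struct gfx_cmd cmd;
	int i;

	if (npoints > MAXPOINTS)
		return -1;
	cmd.opcode = GFX_POLY_LINE;
	cmd.npoints = npoints;
	for (i = 0; i < 2 * npoints; i++)
		cmd.xy[i] = xy[i];
	return gfx_send(&cmd);
}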

Level 2 is the level which hides the frame buffer from the server 
process. This way the server process could be written without having 
any knowledge about the underlying hardware (so the level 3 server code
could be shared between different hardware platforms).
Level 2 contains the low level graphical interface. 
I think level 2 is the compatibility level, which implements requirement 1.
I don't think requirement 1 can be implemented on level 1 due to the
wide range of hardware which has to be supported.
Also most likely the routines in this level will have to be written
in assembly in order to satisfy requirement 3.
Finally implementors can decide to use dedicated hardware (if present)
to implement this functionality.

On this level the following functions should be present:

initialise()
to initialise the library

bitblt(src_x, src_y, dst_x, dst_y, width, height, op)
(is bitblt nowadays the usual name, or should this be called rasterop or so?)
where op can be one of
dst = src
dst = dst	/* this is a no-op */
dst = not src
dst = not dst	/* src not used here */
dst = src and dst
dst = not src and dst
dst = src and not dst
dst = not src and not dst
dst = src or dst
dst = not src or dst
dst = src or not dst
dst = not src or not dst
dst = src xor dst
dst = not src xor not dst	/* same as previous */
dst = not src xor dst
dst = src xor not dst		/* same as previous */
Or, to put it another way: there are 4 basic operations (copy, and,
or, xor), and it is possible to specify for both src and dst whether or
not they should be inverted before doing the operation.
Further operations could be:
dst = 0	/* clear dst area */
dst = expand (src_x, src_y) /* fills dst area with pixel at (src_x, src_y) */

Whenever dealing with color, bitblt will operate on every bit plane.
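
Purely as an illustration (none of these names are part of the proposal),
the operations could be passed to bitblt as symbolic constants:

/* A few of the sixteen possible op codes, with their meaning. */
enum blt_op {
	BLT_COPY,		/* dst = src		*/
	BLT_COPY_INVERTED,	/* dst = not src	*/
	BLT_AND,		/* dst = src and dst	*/
	BLT_OR,			/* dst = src or dst	*/
	BLT_XOR,		/* dst = src xor dst	*/
	BLT_INVERT,		/* dst = not dst	*/
	BLT_CLEAR		/* dst = 0		*/
};

void bitblt(int src_x, int src_y, int dst_x, int dst_y,
	    int width, int height, enum blt_op op);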

set_color_map_entry(rgb_value);
assigns a color to the given rgb_value. If no exact color is available,
the best fitting alternative is used. Returns the color.

get_color_map_entry(color);
returns the rgb value associated with the given color.

set_color_map(colormap *);
replaces the current color map with another one.

get_color_map(colormap *);
returns the complete color map.

set_graphical_context(context pointer)
This sets the graphical context to be used on library operations
(e.g. line style, color, fill pattern ...).

get_graphical_context(context pointer)
To get the graphical context back.
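
The colormap and context types are not pinned down by this proposal;
purely as an assumption, they might look like this (names, field types
and sizes are guesses):

typedef struct {
	unsigned char red, green, blue;
} rgb_value;

typedef struct {
	int size;			/* number of usable entries */
	rgb_value entry[256];		/* 256 is an arbitrary upper bound */
} colormap;

typedef struct {
	int color;			/* drawing color */
	int line_style;			/* e.g. solid, dashed, dotted */
	int fill_pattern;		/* index into a pattern table */
} graphical_context;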

draw_pixel(x, y, pixel value)

draw_line(x_from, y_from, x_to, y_to)

draw_polygon(nr_points, x1, y1, x2, y2, ...)

get_pixel(x, y)
returns the pixel value of the pixel at position x, y

get_screen_attributes(attribute pointer)
obtains information about the screen (in a struct, whose address is
passed as parameter)
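
The layout of this struct is also left open here; a purely hypothetical
version (all field names are assumptions) could be:

/* Hypothetical attribute structure for get_screen_attributes(). */
struct screen_attributes {
	int width, height;	/* screen size in pixels */
	int depth;		/* bits per pixel (1 = monochrome) */
	int planes;		/* number of bit planes */
	long mem_size;		/* frame buffer size in bytes */
};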

I do not know exactly what should go into this level.
I prefer an orthogonal set which is not too big, but is flexible enough
to implement level 3 on efficiently.
Perhaps functions to draw all or part of a circle or an ellipse
should be added here too, but I do not know exactly how to
specify the interface for these.
Suggestions please. If you have suggestions for better names, please let
me know as well.

Level 1 represents the driver.
In order to implement requirement 5, I suggest adding the screen
driver as a minor device to the tty driver.
Making the screen driver a separate task would cost more memory
(for stack etc.), and would make adding/removing the driver
more complicated (changing fs/table.c, mm/table.c, NR_TASKS etc.).
Most previous implementations implement the screen as a
minor device of the memory driver. However, the screen functionality
is closer to the tty than to memory.
The change to tty.c would be very small and have the form

#if ENABLE_SCREEN
	if (minor > SCREENMINOR)
		do_screen(message);	/* all screen handling */
	else
#endif
		/* ... normal tty handling ... */

where do_screen() could do all the screen handling.
This function could be implemented in a separate file.
I don't think adding cases to the read/write/ioctl etc of the tty
driver would be a good idea.

Various minor devices would be used to discriminate between different
resolutions, graphic adaptors, or color vs. monochrome.
It is up to the driver implementor to decide on this.
Different systems (PC, ST) will probably use different minor numbers
for their devices. The reason various minor devices are used,
instead of an ioctl call to change the resolution, is that with
minor devices it is possible to add a new resolution or
device without modifying the levels above. If you could
set the resolution in the server, you would have to modify
the server every time a resolution or device was added.

Standardisation on this level is not really required (remember that
level 2 provides the standard interface). However, in order to
obtain the same kind of behaviour and to keep the drivers as much
alike as possible, it is suggested that the drivers on all
systems have the same structure.

Only a small number of operations are supported by the kernel:

open:
claims the screen and initialises the video adaptor to the requested
graphic mode. Perhaps this also inhibits the vdu driver.
The screen is not cleared by the open operation.
If the screen is already opened and another process attempts to open
the same physical hardware through another minor device, the open fails.
Otherwise weird situations could occur, since the hardware can only be
in one mode at a time.
The effect of opening the screen device on the existing text console
is implementation defined. No guarantee is given that the console is
left undisturbed.
The effect of I/O done to the console while in screen mode
is unspecified.

close:
releases the screen and returns the video adaptor to its previous state.
?? does closing clear the screen??

read/write/seek: these are not used.

ioctl:
The following ioctl commands are supported:
(in the form ioctl(fd, command, arg), where fd is an open file
descriptor referring to a screen device, command is one of the
following symbolic constants, and arg depends on the command used).
The symbolic constants used for command by the screen driver are defined
in <sgtty.h>. Valid values for command are:

	SCRGETPARMS
		Return information about the video parameters to the user.
		Among the members are the number of pixels on the screen, the
		address (segment, segment descriptor) of the video memory,
		the physical dimensions of the screen and
		other information used to access the screen.
		If this command is successful, 0 is returned, -1 otherwise.

	SCRMAP
                This command is used to map the video memory to user space
		(on systems which support this).
                The argument is a pointer to a memory block in user memory.
                The memory block must be large enough to hold the entire video
                memory plus additional space to allow alignment.
                The return value is -1, and errno is set, if this operation
                cannot be performed.
                Otherwise the return value is the aligned pointer to the
		memory block.
                The screen is automatically unmapped from a process's data
                space when that process terminates.
Question: on systems like the ST, where screen images can be placed at
	different places in memory, the only effect of mapping would be that
	graphics go to a separate buffer and the console is left unmodified,
	whereas without mapping the graphics are drawn over the console
	screen. On the other hand, if the screen is not mapped,
	console output will overwrite the graphic screen, allowing the
	display of urgent messages (e.g. panics) from the kernel.
 
	SCRSETCOL
		sets an entry in the color lookup table to the specified value. 
		If this command is successful, 0 is returned, -1 otherwise.

	SCRGETCOL
		gets an entry from the color lookup table.
		If this command is successful, 0 is returned, -1 otherwise.

	SCRSETCOLMAP
		fills the color lookup table.
		If this command is successful, 0 is returned, -1 otherwise.

	SCRGETCOLMAP
		returns the color lookup table.
		If this command is successful, 0 is returned, -1 otherwise.

Maybe other ioctl calls are needed to access other device registers of
the hardware. I doubt whether this is required for the ST, but I cannot
speak for other hardware.
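
To show how a level 2 library might use this driver interface, here is a
hedged sketch. Only the open/close semantics and the SCRxxx command names
come from the proposal; the device name /dev/screen, the scr_color struct
and the argument format of SCRSETCOL are assumptions of mine.

#include <fcntl.h>
#include <unistd.h>
#include <sgtty.h>	/* proposed home of the SCRxxx constants */

/* Hypothetical argument block for SCRSETCOL/SCRGETCOL. */
struct scr_color {
	int index;			/* entry in the color lookup table */
	unsigned char red, green, blue;
};

int screen_demo(void)
{
	struct scr_color col;
	int fd;

	fd = open("/dev/screen", O_RDWR);  /* device name is an assumption */
	if (fd < 0)
		return -1;		/* already open, or mode unavailable */

	col.index = 1;
	col.red = 255; col.green = 0; col.blue = 0;
	if (ioctl(fd, SCRSETCOL, &col) < 0) {	/* set one palette entry */
		close(fd);
		return -1;
	}

	/* ... draw, e.g. after SCRMAP or through the level 2 functions ... */

	close(fd);			/* restores the previous video mode */
	return 0;
}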

--
Frans Meulenbroeks        (meulenbr@prl.philips.nl)
	Centre for Software Technology

PPH93%DMSWWU1A.BITNET@cunyvm.cuny.edu (Thomas Heller) (06/12/91)

Frans,

Thank you for moving this discussion over here.
After thinking over the whole stuff for some days, I have some suggestions.

Level 1 (Driver):

SCRMAP should NOT be supported on IBM PC's, for three reasons:
  1. It's unportable, only possible on systems with an MMU (80386).
  2. It's expensive. Mapping to user space replaces the specified buffer by the
	video memory; the buffer itself is mapped out of existence.
  3. It does not improve performance.

  The only reason to have the screen mapped is to allow access from C-code
  to the screen, which is nice while implementing and debugging the graphics
  library, but not needed thereafter.

Level 2 (Low level library functions):
  The basic primitives needed are indeed bit-block-transfer (bitblt),
  drawing lines and pixels, and handling the color attributes. Circles and
  ellipses for example can be done with these functions.
  (This may not be an efficient approach for drawing filled unregularly
  shaped objects.)

 My suggestions are:

  - Graphical contexts are handled at a higher level. Functions to draw a line
    or pixel should take the color as a parameter.

  - The 'op' parameter in the bitblt function is encoded as a 4 bit binary
    code in the following way:
    The graphic library header file defines two symbolic constants like
    'SRC' and 'DST'. These are 4 bit constants like '0101b' and '0011b'.
    The 'op' parameter for the bitblt function can be specified as any boolean
    combination of these two constants.

    Examples:
	specifying 'op' as (SRC^DST) will exor the source bitmap to the
	destination bitmap.
	specifying 0 (zero) as 'op' will clear the pixels in the destination,
	0xF will set them.
	( If you wish, you can also specify these as (DST&~DST) and
	 (DST|~DST) resp.)
	SRC will simply copy the source bitmap to the destination.

    There are 16 possible combinations of these two constants, which should
    all be covered by the bitblt function (one of them, 'DST', is really
    a NOP). A small C sketch after this list of suggestions illustrates
    the encoding.

    It would be nice if the functions which draw pixels and lines also
    take the 'op' parameter, although obviously only 4 of the combinations
    make sense here: DST (is the nop again), ~DST (invert), 0 (clear)
    and 0xF (set).

  - The graphical library has to handle 'bitmaps'.
    Bitmaps are portions of memory thought of as rectangular windows of a certain
    size ('width' pixels wide, 'height' pixels high, 'depth' bits deep
    depending on the number of colors/bitplanes).

    The actual memory layout of a bitmap depends on the display hardware.
    It is up to the library to hide this.

    Bitmaps may be part of the video memory; they may also be in main memory
    or even on disk (in temporary or permanent files).

    The graphical library header file should define a BITMAP structure
    like:

    typedef struct {
      unsigned short width, height, depth;
      char *data;				/* pointer to bitmap data */
      int type; 		/* SCREEN, MEMORY, DISK, maybe some more  */
    } BITMAP;

  - The drawing primitives take as an additional parameter a pointer to
    the bitmap where the action has to be performed. Bitblt obviously
    has to take two bitmap pointers.

  - Functions have to be provided to create and destroy bitmaps.
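
As mentioned in the note on the 16 combinations above, here is a small C
sketch of how the 4-bit 'op' could be applied word by word inside bitblt,
together with bitmap-based prototypes. The constants follow the values
suggested above (SRC = 0101b, DST = 0011b); all other names are
illustrative assumptions, not a fixed interface.

#define DST	0x3	/* 0011b */
#define SRC	0x5	/* 0101b */

/* Apply a 4-bit raster op to one word of source and destination data.
 * Each bit of 'op' switches one row of the truth table over (src, dst)
 * on or off; with these constants SRC yields a plain copy, DST a no-op,
 * SRC^DST an exclusive or, 0 clears and 0xF sets.
 */
unsigned short apply_op(int op, unsigned short s, unsigned short d)
{
	unsigned short r = 0;

	if (op & 0x8) r |= ~s & ~d;	/* src = 0, dst = 0 */
	if (op & 0x4) r |=  s & ~d;	/* src = 1, dst = 0 */
	if (op & 0x2) r |= ~s &  d;	/* src = 0, dst = 1 */
	if (op & 0x1) r |=  s &  d;	/* src = 1, dst = 1 */
	return r;
}

/* Drawing primitives taking explicit bitmaps, as suggested above. */
void bitblt(BITMAP *src, int src_x, int src_y,
	    BITMAP *dst, int dst_x, int dst_y,
	    int width, int height, int op);

BITMAP *create_bitmap(int width, int height, int depth, int type);
void destroy_bitmap(BITMAP *bm);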

Levels 3, 4 and 5:
  No suggestions here (from me); I think this is up to the application/server
  programmer. In fact, all of these levels could even be combined in one program.

----------------------------------------------
Thomas Heller, University of Muenster, Germany   <PPH93@DMSWWU1A.BITNET>

Marcelo Pazzini <PAZZINI%BRUFSM@cunyvm.cuny.edu> (06/13/91)

On Tue, 11 Jun 91 13:43:51 GMT Frans Meulenbroeks said:
>This message proposes a standard for interfacing with a bitmapped screen
>under MINIX.
  (Stuff deleted. For the most part I agree, but...)
>3) the way images are stored in offscreen memory is implementation
>   specific. Depending on the hardware used, different storing
>   techniques are preferred. For instance using VGA it may be the best
>   to store the diffent bit planes after each other. On the ST however,
>   it is more convenient to store all bits of a plane sequentially
>   (so complete pixels are stored).

Okay. It is fast, but if you store a bitmap image on disk you cannot load
it on other machines. I think your idea is very good, but it is missing a
function to convert the bitmap to a standard format. This could be a "de facto"
standard already in use today.

I wish you good luck writing this stuff from scratch. ;-)

Marcelo Pazzini                                   (pazzini@brufsm.BITNET)

DELC - CT                                         The child is grown,
UFSM Campus                                       The dream is gone,
Santa Maria, RS                                   And I have become,
97119                                             Comfortably... teacher.
BRASIL   (BRAZIL if you're out of here!)          (D.Gilmour, diff by me)