[comp.soft-sys.andrew] Font sizes in e.g. "fontdesc_Create" - pixels, points, or other?

guy@auspex.auspex.com (Guy Harris) (12/09/89)

Is the "size" argument to "fontdesc_Create" to be considered a size in
pixels, points (or some other resolution-independent units), or
something else?  I.e., does it ask for characters some number of screen
quanta in size, some number of resolution-independent units in size, or
something else?

zs01+@ANDREW.CMU.EDU (Zalman Stern) (12/10/89)

In theory, the value is supposed to be in points. The implementation is
such that if you ask for a 12 point font, you will get whatever your window
system has decided to call foo12. If ATK gets ported to a system with
on-the-fly font generation (e.g. Display PostScript or NeWS), the font
size can be interpreted correctly as points.

Sincerely,
Zalman Stern

guy@auspex.UUCP (Guy Harris) (12/19/89)

>In theory, the value is supposed to be in points. The implementation is
>such that if you ask for a 12 point font, you will get whatever your window
>system has decided to call foo12.

Actually, the implementation running here is such that, if you ask for a
12 point font, it'll first look for fonts matching:

	*-*-foo-medium-r-normal-*-*-120-*-*-*-*-*-*

(I think I got that right :-)); my question was more or less whether it
should ask for that, or for

	*-*-foo-medium-r-normal-*-12-*-*-*-*-*-*-*

i.e., should it ask for the font by PixelSize, or by PointSize (in
decipoints)?
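
For concreteness, the two alternatives amount to handing one pattern or the
other to the server and seeing what it matches.  A rough, untested sketch
(plain Xlib, not ATK code; the patterns are copied verbatim from above, with
"foo" still standing in for a real family name):

	#include <X11/Xlib.h>
	#include <stdio.h>

	/* Untested sketch, not the actual ATK code: list what the server
	   would match for a given XLFD pattern. */
	static void show_matches(Display *dpy, char *pattern)
	{
	    char **names;
	    int count, i;

	    names = XListFonts(dpy, pattern, 100, &count);
	    if (names == NULL)
	        count = 0;
	    printf("%s -> %d match(es)\n", pattern, count);
	    for (i = 0; i < count; i++)
	        printf("    %s\n", names[i]);
	    if (names != NULL)
	        XFreeFontNames(names);
	}

	int main()
	{
	    Display *dpy = XOpenDisplay(NULL);

	    if (dpy == NULL)
	        return 1;
	    /* by PointSize, in decipoints (what it asks for now): */
	    show_matches(dpy, "*-*-foo-medium-r-normal-*-*-120-*-*-*-*-*-*");
	    /* by PixelSize (the alternative): */
	    show_matches(dpy, "*-*-foo-medium-r-normal-*-12-*-*-*-*-*-*-*");
	    XCloseDisplay(dpy);
	    return 0;
	}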

>If ATK gets ported to a system with on-the-fly font generation (e.g.
>Display PostScript or NeWS), the font size can be interpreted correctly
>as points.

I think the issue isn't one of whether you have outline or bit-mapped
fonts (as I interpret the X Logical Font Description proposal, with a
sufficiently large set of bit-mapped fonts you can at least try to ask
for a particular real live point size, given that you know the display's
resolution - which X will let you find out); it's one of whether the
units in which you draw are to be considered pixel units or real-world
geometry units.

For instance, I suspect the units used by the methods for "graphic" are
pixel units, not, say, inches.  So, in order to draw a 2-inch line,
you'd need to know the resolution of the display in pixels-per-inch.  If
there were a way of finding that out, some application like "zip" could
think internally in inches and only drop down to pixels when it came
time to actually draw lines.  If applications worked like that, it'd be
appropriate to have font sizes be interpreted as points or some other
geometric units, not pixels.
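
X does let you find that out, for what it's worth: the server advertises
both the pixel and the millimeter dimensions of each screen.  Something
along these lines (an untested sketch, plain Xlib, ignoring the fact that
the horizontal and vertical figures can differ) would give the number:

	#include <X11/Xlib.h>

	/* Untested sketch: rough horizontal pixels-per-inch for a screen,
	   computed from the pixel and millimeter widths the server
	   advertises (25.4 mm to the inch). */
	double screen_ppi(Display *dpy, int screen)
	{
	    return (double) DisplayWidth(dpy, screen) * 25.4
	        / (double) DisplayWidthMM(dpy, screen);
	}

With that in hand, the 2-inch line is just 2 * screen_ppi(dpy, screen)
pixels, and a 12-point font request could likewise be turned into roughly
12 * screen_ppi(dpy, screen) / 72 pixels before dropping down to pixel level.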

However, if applications draw lines, etc. in pixels, I suspect it'd be
inappropriate for them to think of fonts in points, since, if points
really *do* mean points, not all elements of a drawing would scale the
same way when you move from a screen with one resolution to a screen
with another resolution.

There are some places, e.g. styles, where you can work in geometric
units; however, the code that goes from e.g. inches to pixels in the
"style" code thinks there are 72 pixels per inch, period (cf. CVDots in
"atk/support/style.c").  Given that, at present the conversion between
geometric and pixel units is fixed, and has nothing to do with the
resolution of the screen, so pixels and points are the same thing.
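
Schematically (a sketch only, not what CVDots actually looks like), the
difference between the current conversion and a resolution-aware one is
roughly this:

	/* Sketch only, not the actual atk/support/style.c code. */
	#define FIXED_PPI 72.0	/* what the current conversion assumes */

	/* What the current scheme amounts to: 72 pixels per inch, period. */
	long inches_to_pixels_fixed(double inches)
	{
	    return (long) (inches * FIXED_PPI + 0.5);
	}

	/* What a resolution-aware version would do, given the screen's
	   actual pixels-per-inch (e.g. from the earlier sketch). */
	long inches_to_pixels_real(double inches, double ppi)
	{
	    return (long) (inches * ppi + 0.5);
	}

With the first form, a "12 point" request and a 12-pixel request are the
same thing; with the second, they only coincide on a 72-pixel-per-inch
screen.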

Has any thought been given to what the "graphic" layer would do under,
say, a window system with a PostScript imaging model?  Would it set up
the transformation matrix so that it worked in pixels, or would it use
the "default" matrix (with units of 1/72 inch), or something else?  Is
the "proper" notion that the units used by, say "graphic" are pixels or
1/72 inches?