[comp.lang.postscript] Important safety tip

amanda@intercon.UUCP (Amanda Walker) (06/13/89)

It seems user defined fonts (FontType 3) are a little more magical than they
appear.  Consider the following piece of PostScript code:
--------------------------
10 dict begin		% leave room for the FID entry that definefont adds
	/FontType 3 def
	/FontMatrix [.001 0 0 .001 0 0] def
	/FontBBox [0 0 1000 1000] def
	/BaseFont /Courier findfont [800 0 0 1000 0 0] makefont def
	/Encoding 256 array def
	StandardEncoding Encoding copy pop
	/String 1 string def
	/StrokeWidth 40 def
	/BuildChar {
		exch begin
		BaseFont setfont
		String exch 0 exch put
		String stringwidth
		StrokeWidth setlinewidth
		1 setlinecap
		newpath 0 0 moveto String false charpath pathbbox
		setcachedevice
		newpath 0 0 moveto String false charpath stroke
		end
	} def
	currentdict
end /Courier-Compressed exch definefont pop
--------------------------
This creates a font that is based on Courier, but compressed to 80% of
its normal width, with the stroke weight beefed up a little.  Pretty
simple, right?  Intending to keep this font as small as possible, I
tried removing the definition for /Encoding, but definefont objected.
So, I tried "/Encoding [] def" instead, since I don't actually use the
encoding vector for anything.  "definefont" took it, but when
I tried to show some text with it, my LaserWriter sent back "Fatal System
Error at 0Xnnnnn" where nnnnn is a hexadecimal number I didn't write
down, and then promptly rebooted itself.

Is this a known bug in PostScript 38.0?

--
Amanda Walker <amanda@intercon.UUCP>
--
"Some of the worst mistakes in history have resulted from trying to apply
methods that work fine in one field to another where they don't." -James Hogan

greid@adobe.com (Glenn Reid) (06/14/89)

In article <13-Jun-89.091841@192.41.214.2> amanda@intercon.UUCP (Amanda Walker) writes:
>It seems user defined fonts (FontType 3) are a little more magical than they
>appear.  Consider the following piece of PostScript code:

 [font deleted]

>This creates a font that is based on Courier, but compressed to 80% of
>its normal width, with the stroke weight beefed up a little.  Pretty
>simple, right?  Intending to keep this font as small as possible, I
>tried removing the definition for /Encoding, but definefont objected.
>So, I tried "/Encoding [] def" instead, since I don't actually use the
>encoding vector for anything.  "definefont" took it, but when
>I tried to show some text with it, my LaserWriter sent back "Fatal System
>Error at 0Xnnnnn" where nnnnn is a hexadecimal number I didn't write
>down, and then promptly rebooted itself.

>Is this a known bug in PostScript 38.0?

No, it is a bug in your program.  You must have an Encoding vector of
256 elements or you can't print characters.  The fact that "definefont"
accepted an empty Encoding at all reflects a slightly relaxed check, and
you should not count on it working in the future (or even in the
present).

Well, to be fair, it is also a bug in PostScript 38.0, since it should
never crash with a Fatal System Error :-)

If you don't care about the Encoding, you should just do this:

	/Encoding StandardEncoding def

Since "StandardEncoding" is already defined, you end up using the
same amount (or less) memory than if you had done /Encoding [] def.
In truth, you do care about the encoding, because Courier cares about
the encoding, and you are using Courier.  You should just copy the
Encoding along with the rest of the font, and not make an exception of
it.  It looks like you were borrowing code from a re-encoding
algorithm, which naturally allocates a new encoding array, but I don't
think you need to do that in your case.  Just copy it in the "forall"
loop along with the rest of the font (except FID).
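
Something along these lines is the usual idiom (typed off the top of
my head, so take it as a sketch; the extra dict slots are only there
to leave room for the new entries plus the FID that definefont adds):

	/Courier findfont dup
	maxlength 4 add dict begin
		{ 1 index /FID ne { def } { pop pop } ifelse } forall
		% ...then override FontType, FontMatrix, add BuildChar, etc...
	currentdict end
	/Courier-Compressed exch definefont pop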

Also, there is nothing really magical about user-defined fonts.  You
basically still have a Type 1 font, even though you changed its type to
3 (since you use all the same character definitions).  In fact, maybe
you should just leave the FontType at 1.
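
If the heavier strokes turn out not to matter, the whole thing
collapses to something like this (point sizes picked out of thin air):

	/Courier findfont [8 0 0 10 0 0] makefont setfont
	72 72 moveto (squeezed Courier, the easy way) show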

I hope this helps.

Glenn Reid
Adobe Systems

mccoy@adobe.com (Bill McCoy) (06/14/89)

For those curious as to just exactly *why* an encoding vector is
required for all user-defined fonts...

from the Red Book, 2nd Ed., p.105:

  When a PostScript program tries to print a character of a user-defined
  font **and the character is not already present in the font cache**, the
  font machinery... executes the font's BuildChar procedure.

Given a character code in a show string, the letterform corresponding to that 
character code is determined by using the character code as an index into the
encoding vector to obtain a character name. Hence even though a Type 3 font
may never use it, the encoding vector is required for the font machinery to 
determine if a character is present in the font cache.
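
You can watch the first step of that mapping yourself; for instance
(just a toy example, typed into the interactive executive):

	(A) 0 get                                  % the character code, 65
	/Courier findfont /Encoding get exch get   % index the Encoding with it
	==                                         % prints /A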

Also, it's worthwhile to note that recent versions *do* enforce the Red Book
requirement that the encoding vector be an array of exactly 256 objects.

Bill McCoy
Adobe Systems, Inc.

batcheldern@icosa.dec.com (Ned Batchelder) (06/16/89)

The Encoding is a required component of a font, as the table on page 95
of the Red Book points out.  Even though the BuildChar procedure never
uses it, the names in the encoding vector are used as the basis for
finding glyphs in the font cache.  Clearly, it would be nicer if leaving
it out got you an error from definefont instead of crashing your printer.

If you want to save space in your font, replace these lines:

>	/Encoding 256 array def
>	StandardEncoding Encoding copy

with:

	/Encoding StandardEncoding def

There is no need to make a copy of the array; no one is going to be changing it anyway.

What should you do if you really don't need an encoding vector?  For
example, suppose you want to create a font which is just 256
different-sized circles, where the character code is used as the radius
of the circle?  If you don't have an encoding vector with 256 unique
names, the font cache will give you the wrong glyphs.  If you use
StandardEncoding, there are duplicates in it, so you wouldn't get 256
different glyphs, either.  (For example, the first 32 entries in
StandardEncoding are all /.notdef, so character codes 0-31 all map to
the same entry in the font cache.)

If you try to create an explicit array of 256 distinct names, it takes up a lot of space in the PostScript file.

What I came up with was:

	/Encoding [ 0 1 255 { (_) dup 0 4 3 roll put cvn } for ] def

This uses a loop to stuff each character code into a one-character
string and convert it to a one-byte name.  The names are left on the
stack, where the square brackets gather them into the array.
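
For what it's worth, here is a sketch of the whole circle font built
around that Encoding (untested; the font name, dict size, and FontBBox
are just guesses):

	10 dict begin
		/FontType 3 def
		/FontMatrix [.001 0 0 .001 0 0] def
		/FontBBox [0 -255 510 255] def
		/Encoding [ 0 1 255 { (_) dup 0 4 3 roll put cvn } for ] def
		/BuildChar {
			exch pop		% this BuildChar never needs the font dict
			1 dict begin		% scratch dict, just for readability
			/r exch def		% the character code doubles as the radius
			r 2 mul 0		% wx wy: advance by the diameter
			0 r neg r 2 mul r	% llx lly urx ury
			setcachedevice
			newpath r 0 r 0 360 arc closepath fill
			end
		} def
		currentdict
	end /Circles exch definefont pop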

Any other cute tricks?

Ned Batchelder, Digital Equipment Corp., BatchelderN@Hannah.DEC.com

amanda@intercon.UUCP (Amanda Walker) (06/20/89)

In article <906@adobe.UUCP>, greid@adobe.com (Glenn Reid) writes:
> Well, to be fair, it is also a bug in PostScript 38.0, since it should
> never crash with a Fatal System Error :-)

Well, actually, that's mostly what I was referring to when I asked if it
was a known bug, even if I didn't end up saying it that way :-).

> Also, there is nothing really magical about user-defined fonts.  You in
> fact basically still have a Type 1 font, despite the fact that you
> changed its type to 3 (since you use all the same character
> definitions).  In fact, maybe you should leave the FontType to be 1.

Almost.  I do supply a BuildChar, which takes a character path from a
squished copy of Courier and strokes it.  Since the path gets compressed
*before* the stroke, the characters come out narrower but with even
stroke weights.  This looks a lot better than just making a compressed
version of Courier (of course, this technique only works on fonts that
are designed to be stroked rather than filled).
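
Here's a quick way to see the difference on paper (a rough sketch; the
coordinates and sizes are arbitrary, and /Courier-Compressed is the
font defined above):

	/Courier findfont [8 0 0 10 0 0] makefont setfont
	72 420 moveto (plain makefont: the strokes get squeezed too) show
	/Courier-Compressed findfont 10 scalefont setfont
	72 400 moveto (charpath + stroke: the strokes stay even) show
	showpage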

--
Amanda Walker <amanda@intercon.UUCP>
--
"Some of the worst mistakes in history have resulted from trying to apply
methods that work fine in one field to another where they don't." -James Hogan

andwi@majestix.ida.liu.se (Andreas Wickberg) (06/22/89)

In article <907@adobe.UUCP> mccoy@adobe.COM (Bill McCoy) writes:
>For those curious as to just exactly *why* an encoding vector is
>required for all user-defined fonts...
>
>from the Red Book, 2nd Ed., p.105:
>
>  When a PostScript program tries to print a character of a user-defined
>  font **and the character is not already present in the font cache**, the
>  font machinery... executes the font's BuildChar procedure.
>

Isn't it more like this: when you define a font (with a new encoding),
the font machinery compares its encoding vector with the vectors
belonging to the fonts already in the font cache.  That is the real
'id' of a font.  When the font machinery later checks whether a
character is in the font cache, it just has to look for the character
code, not the name.  This would explain why the first use of Times
10 pt with the Mac encoding doesn't find the characters cached during
idle time.  An early bug fixed by disabling a feature?