[net.text] Digitized images on bilevel displays

tom@umich.UUCP (Tom Libert) (01/28/86)

We've been experimenting with different techniques for displaying
digitized images (5-8 bits/pixel) on bilevel displays (e.g. laser
printers, bitmapped terminals).  So far we've implemented ordered
dither, constrained average, a form of minimized average error
(Floyd-Steinberg algorithm), and various halftone approximation
techniques (spirals, asymmetric dots).  We've also used the
equalized histogram technique to spread contrast over the
entire range prior to applying the above algorithms.  In addition,
we've implemented a "rubber sheet" algorithm for scaling images
to any desired size.
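
For concreteness, here is roughly what the histogram equalization
step looks like.  This is a bare-bones sketch in C, assuming 8-bit
grayscale pixels; it is illustrative, not our actual code:

/* Histogram equalization: remap each pixel through the normalized
 * cumulative histogram so that intensities spread over the full
 * 0..255 range.  Assumes sum * 255 fits in a long. */
void equalize(unsigned char *img, long npixels)
{
    long hist[256] = {0};
    unsigned char map[256];
    long i, sum = 0;

    for (i = 0; i < npixels; i++)       /* build the histogram */
        hist[img[i]]++;

    for (i = 0; i < 256; i++) {
        sum += hist[i];                 /* running CDF */
        map[i] = (unsigned char)(sum * 255 / npixels);
    }

    for (i = 0; i < npixels; i++)       /* remap every pixel */
        img[i] = map[img[i]];
}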

An excellent survey article was written by J. F. Jarvis, C. N.
Judice, and W. H. Ninke.  The article, entitled "A Survey of
Techniques for the Display of Continuous Tone Pictures on
Bilevel Displays," appeared in Computer Graphics and Image
Processing 5, pp. 13-40, 1976.  These topics are also treated to
some extent in "Fundamentals of Interactive Computer Graphics" by
Foley and van Dam, and in "Principles of Interactive Computer
Graphics" by Newman and Sproull.

Of the various techniques, ordered dither is the most efficient
computationally, and produces surprisingly good results considering
the simplicity of the method.  The only drawback is the appearance
of a regular pattern in regions of constant intensity.  The
minimized average error techniques (which include the Floyd-Steinberg
algorithm) produce very nice pictures without regular subpatterns,
but tuning the black and white thresholds to obtain a pleasing
result requires somewhat more work.  The constrained average technique
is good for accentuating details, but is probably best used for
improving the appearance of digitized text and line drawings rather
than digitized photographs.
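
To give a feel for how cheap ordered dither is, here is a bare-bones
sketch using the standard 4x4 Bayer matrix.  This is illustrative C,
not our actual code; input pixels are assumed to be 0..255:

/* 4x4 Bayer ordered dither matrix (values 0..15). */
static const int bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 }
};

/* Threshold each pixel against the matrix entry for its position;
 * out[] gets 1 for white, 0 for black. */
void ordered_dither(unsigned char *in, unsigned char *out, int w, int h)
{
    int x, y;
    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++) {
            int t = (bayer4[y & 3][x & 3] * 255 + 8) / 16;  /* scale to 0..255 */
            out[(long)y * w + x] = (in[(long)y * w + x] > t) ? 1 : 0;
        }
}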

According to Jarvis et al., minimized average error should use at
least 12 neighboring cells for the average error computation, but
Floyd-Steinberg distributes the error to only four neighbors (though
errors propagate transitively as the scan proceeds).  Has anyone
compared these techniques?  Are there any
other techniques which are worth trying?  I'm interested in comparing notes
with other people who are working with these algorithms.
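
For reference, the error propagation I'm describing looks roughly
like this.  It's a simplified sketch with a fixed mid-gray threshold;
the weights 7, 3, 5, and 1 sixteenths are the standard Floyd-Steinberg
ones, and the buffer is int so errors can go negative or above 255:

/* Floyd-Steinberg error diffusion, in place on an int buffer of
 * 0..255 pixels.  After the call, img[] holds 0 or 255. */
void fs_dither(int *img, int w, int h)
{
    int x, y;
    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++) {
            long i = (long)y * w + x;
            int out = (img[i] < 128) ? 0 : 255;
            int err = img[i] - out;
            img[i] = out;
            /* Push the error onto the four unvisited neighbors.
             * Integer division drops a little error; good enough
             * for a sketch. */
            if (x + 1 < w)     img[i + 1]     += err * 7 / 16;
            if (y + 1 < h) {
                if (x > 0)     img[i + w - 1] += err * 3 / 16;
                               img[i + w]     += err * 5 / 16;
                if (x + 1 < w) img[i + w + 1] += err * 1 / 16;
            }
        }
}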

Ultimately, we plan to support incorporating digitized images within
TeX documents, probably via \special.

				Tom Libert
				U of Michigan, EECS
				...ihnp4!umich!tom

tom@umich.UUCP (Tom Libert) (01/31/86)

Short addendum to my earlier posting:

1) Andy Hertzfeld just told me about an algorithm known as the "Knight's
   Tour", which was recommended to him by John Warnock.  Anybody have
   pointers to this algorithm? (e.g. ACM Trans. on Graphics, or some such?)

2) I'm doing rescaling ("rubber-sheeting") using a simple 2-dimensional
   linear interpolation.  A better approximation would use a higher-order
   polynomial, but this takes significantly more time.  The simple
   approach gives acceptable results, but one could probably do much
   better.  Has anyone looked into the tradeoffs involved?  A sketch
   of my current approach appears below.
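
The sketch, in C, with simplified edge handling (this is roughly what
I mean by 2-dimensional linear interpolation, not our actual code):

/* Rescale src (sw x sh) into dst (dw x dh) by bilinear interpolation
 * of the four surrounding source pixels. */
void rescale(unsigned char *src, int sw, int sh,
             unsigned char *dst, int dw, int dh)
{
    int x, y;
    for (y = 0; y < dh; y++) {
        double fy = (dh > 1) ? (double)y * (sh - 1) / (dh - 1) : 0.0;
        int y0 = (int)fy, y1 = (y0 + 1 < sh) ? y0 + 1 : y0;
        double wy = fy - y0;
        for (x = 0; x < dw; x++) {
            double fx = (dw > 1) ? (double)x * (sw - 1) / (dw - 1) : 0.0;
            int x0 = (int)fx, x1 = (x0 + 1 < sw) ? x0 + 1 : x0;
            double wx = fx - x0;
            /* interpolate along x on the two bracketing rows, then
             * along y between those results */
            double top = src[(long)y0*sw + x0] * (1-wx) + src[(long)y0*sw + x1] * wx;
            double bot = src[(long)y1*sw + x0] * (1-wx) + src[(long)y1*sw + x1] * wx;
            dst[(long)y*dw + x] = (unsigned char)(top * (1-wy) + bot * wy + 0.5);
        }
    }
}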