[comp.ai.digest] Help Needed With Data Smoothing, Character Recognition

LAWS@IU.AI.SRI.COM (Ken Laws) (02/19/88)

Much of the MIT vision literature deals with data smoothing and
interpolation by fitting mathematical "thin plates" through the
image data.  The data I get is usually too smooth already, which
may be why the human visual system introduces the Mach band effect.
The question is: once you have smooth data (e.g., if it were given
to you initially), what are you going to do with it?  Threshold it?
Detect edges? Segment it?  Match it to templates?  To generic models?
Take Fourier transforms?  Moment invariants?  Count concavities
relative to the convex hull?
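Of the options above, recognition by moments is easy to sketch.  The
following is a toy illustration (my own, not from any of the books
mentioned here): it computes the first Hu moment invariant, eta20 +
eta02, which is unchanged by translation and scaling of the pattern.

```python
import numpy as np

def hu_phi1(img):
    """First Hu moment invariant (eta20 + eta02) of a 2-D image.

    Works on gray or binary arrays; invariant to translation and
    scale.  A minimal sketch, not a full character recognizer.
    """
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()                        # total "mass"
    xbar = (xs * img).sum() / m00          # centroid
    ybar = (ys * img).sum() / m00

    def mu(p, q):                          # central moment mu_pq
        return ((xs - xbar) ** p * (ys - ybar) ** q * img).sum()

    def eta(p, q):                         # scale-normalized moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    return eta(2, 0) + eta(0, 2)
```

Because the central moments are taken about the centroid, the same
glyph shifted elsewhere in the image yields the same value, which is
what makes moments attractive for matching scanned characters.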

The graphics-oriented vision literature tends to consider only binary
data, ignoring the gray levels that high-quality scanners pick up.
There are shrink/expand techniques for smoothing and many papers
on how to characterize approximations to straight lines and arcs
on a digital grid.
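The shrink/expand idea is simple enough to show directly.  Here is a
minimal sketch (my own, assuming a 3x3 neighborhood): "shrink" keeps
a pixel only if its whole neighborhood is set, "expand" sets a pixel
if any neighbor is set, and applying an opening (shrink then expand)
followed by a closing (expand then shrink) removes isolated specks
and fills pinholes without much change to the bulk of a region.

```python
import numpy as np

def shrink(img):
    """3x3 erosion: a pixel survives only if it and all 8 neighbors are set."""
    h, w = img.shape
    p = np.pad(img, 1)                     # zero border
    out = np.ones((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w].astype(bool)
    return out

def expand(img):
    """3x3 dilation: a pixel is set if it or any of its 8 neighbors is set."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w].astype(bool)
    return out

def smooth(img):
    """Opening then closing: drops isolated noise, fills small holes."""
    opened = expand(shrink(img))
    return shrink(expand(opened))
```

A solid square passes through unchanged while a stray noise pixel is
eliminated, which is the smoothing behavior the papers characterize
on a digital grid.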

You should check out the IEEE book list, particularly the pattern
recognition conferences and related books such as "Machine Recognition
of Patterns" and "Computer Text Recognition and Error Correction".
There is a very old book called "Optical Character Recognition" that
still has some good info on recognition by moments and some examples
of just how bad scanned characters can be.

					-- Ken
-------