[comp.graphics] Reconstruction of blurred images...

dan@rna.UUCP (Dan Ts'o) (05/18/89)

	Can someone get me started on the topic of reconstructing blurred
images?  Hopefully with references and available programs.
	I understand that many schemes depend on knowing the characteristics
of the blurring process, but not all do. Although we can make a few guesses
about our blurring process, just how badly off are we if we don't make any
such assumptions?
	Please email responses. Thanks.

				Cheers,
				Dan Ts'o		212-570-7671
				Dept. Neurobiology	dan@rna.rockefeller.edu
				Rockefeller Univ.	...cmcl2!rna!dan
				1230 York Ave.		rna!dan@nyu.edu
				NY, NY 10021		tso@rockefeller.arpa
							tso@rockvax.bitnet

jwl@ernie.Berkeley.EDU (James Wilbur Lewis) (05/18/89)

In article <579@rna.UUCP> dan@rna.UUCP (Dan Ts'o) writes:
>
>	Can someone get me started on the topic of reconstructing blurred
>images?  Hopefully with references and available programs.

Well, one approach you might want to look into is to deconvolve the 
image and blur function.  Let's assume you know the point-spread
function (i.e. what the blurred image of a single point would look
like).  It's not hard to show that the blurred image will be the
convolution of the raw image and the point-spread function.  Your
job is now to invert the convolution to get the raw image from the
blurred image and the point-spread function.  You can do this with
Fourier transforms -- a convolution in the spatial domain is identical
to a point-by-point multiplication in the frequency domain.  So the
algorithm is to take the Fourier transform of the blurred image,
divide out the Fourier transform of the PSF, and do the inverse FT
on the result to obtain the raw image.

There is one snag.  If the transform of the PSF has zeroes in it, 
information is lost in the blurring process, and you can't perfectly
recover the raw image.  Too bad, eh?  But you probably weren't
expecting miracles anyway... :-)   I guess when you do the division
you could just skip over any (frequency domain) points where 
FT(PSF) == 0.  

Oh yeah...I guess the PSF would have to be constant over the image
for this to work.
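In code, the whole recipe (including the skip-the-zeroes fudge, and assuming
a constant PSF as noted above) might look something like this -- a minimal
numpy sketch, not a production restorer:

```python
import numpy as np

def fourier_deconvolve(blurred, psf, eps=1e-6):
    """Estimate the raw image from a blurred image and a known,
    shift-invariant PSF by dividing in the frequency domain."""
    # Pad the PSF out to the image size and centre it on the origin,
    # so the FFTs line up and the result isn't shifted.
    h = np.zeros_like(blurred, dtype=float)
    h[:psf.shape[0], :psf.shape[1]] = psf
    h = np.roll(h, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

    B = np.fft.fft2(blurred)
    H = np.fft.fft2(h)

    # Skip (zero out) frequencies where FT(PSF) is ~0 -- the blur
    # destroyed that information and it can't be recovered.
    ok = np.abs(H) > eps
    R = np.where(ok, B / np.where(ok, H, 1.0), 0.0)
    return np.real(np.fft.ifft2(R))
```

Note this uses circular (wraparound) boundary conditions, which is what the
plain FFT gives you; real photos would want padding or windowing at the edges.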

>	I understand that many schemes depend on knowing the characteristics
>of the blurring process, but not all do. Although we can make a few guesses
>about our blurring process, just how badly off are we if we don't make any
>such assumptions?

Hmmm.  Can you fudge it by looking for isolated features in the blurred image 
which correspond to point sources in the raw image?  That way you could
read the PSF right off the image.  You could probably also get some information
out of edges....in fact, if the PSF is radially symmetric, as it ought to be
for a process like defocusing,  a single edge of arbitrary orientation is 
probably as good as an isolated point.  (sheer conjecture on my part -- 
anyone know for sure?) 

references?  hmmm...ok...(digdigdig)...how about Gonzalez & Wintz, _Digital
Image Processing_?   

I've got some nifty 2-D FFT code (in C) if you need it.

Good luck!

-- Jim Lewis
   U.C. Berkeley

phil@ux1.cso.uiuc.edu (05/20/89)

Can this process of deblurring be applied where some points have a wider
spread than others, such as is the case with limited depth-of-field photos?

Also, what will aperturing effects do to the process?  This is when you
have a foreground subject exhibiting lots of apertures, and the background
having some peculiar shape, such as photographing a crescent moon through
a leafy tree, out of focus.  The aperturing effect obviously distorts the
PSF of the background subject, but perhaps the background subject can help
define the aperturing pattern.

--phil

jwl@ernie.Berkeley.EDU (James Wilbur Lewis) (05/20/89)

In article <5300011@ux1.cso.uiuc.edu> phil@ux1.cso.uiuc.edu writes:
>
>Can this process of deblurring be applied where some points have a wider
>spread than others, such as is the case with limited depth-of-field photos?

The deconvolution algorithm I described assumes that the blurring process
is constant over the field of view.  The power at each point in frequency space
depends on the whole image, including the blurred and unblurred parts of
the image; similarly, the intensity of each spatial point depends on the
frequency spectrum as a whole.  So if you alter the frequency spectrum
in an attempt to correct the defocused background, I expect you'll end
up blurring the in-focus portions of the image.

I think you might have to eyeball the image to break it up into 
contiguous regions where the point-spread function is constant, and apply
the technique to each region.  It sounds like a real hassle -- there's
probably a better way to do it, but I don't know how.

>Also, what will aperturing effects do to the process?  This is when you
>have a foreground subject exhibiting lots of apertures, and the background
>having some peculiar shape, such as photographing a crescent moon through
>a leafy tree, out of focus.  The aperturing effect obviously distorts the
>PSF of the background subject, but perhaps the background subject can help
>define the aperturing pattern.

Now this one might be solvable, assuming the moon is perfectly focused
and you just want to get rid of the aperturing effects.  The moon, at
the image scale I think you're talking about, is a pretty high-contrast
object -- basically a uniformly bright(*), very sharply defined object, meaning
(I think!) little low-frequency information content.  I'm guessing that the 
"aperturing" effects you're talking about are some sort of mottling of the 
moon's image by out-of-focus leaves.  Since the leaves are defocused, their 
image will be missing the higher spatial frequencies.  So you ought to be
able to use a high-pass filter to separate the low-frequency noise due
to the aperturing from the high-frequency signal from the moon.  As above,
this will wreak havoc with the rest of the image, but you could crop out the
part of the image containing the moon and just operate on that. I'd use an 
exponential roll-off instead of a "brick wall" filter to avoid ringing in the 
filtered image, and play around with different filter radii to see which 
one gives the best results.  
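For what it's worth, here's a minimal numpy sketch of such a filter -- the
exponential roll-off shape and the `radius`/`order` knobs are exactly the
things to play around with:

```python
import numpy as np

def highpass_exponential(image, radius, order=2.0):
    """High-pass filter with a smooth exponential roll-off instead of
    a "brick wall" cutoff, to keep ringing out of the filtered image."""
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]   # spatial frequencies, cycles/pixel
    fx = np.fft.fftfreq(nx)[None, :]
    r = np.hypot(fy, fx)
    # Transfer function: 0 at DC, rising smoothly toward 1 above `radius`.
    H = 1.0 - np.exp(-(r / radius) ** order)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```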

Geez.  Couldn't you have just moved the camera so you wouldn't have to
shoot through the trees? :-)

If any of that sounds bogus, let me know -- this is strictly handwaving, and
for all I know I could be bullshitting you blind...  I'll x-post to
sci.astro in case any of those folks want to take a stab at it.

-- Jim Lewis
   U.C. Berkeley

(*) yeah, i know about limb darkening, maria, and so on -- but a low-pass
    filter should get rid of all those unsightly blotches on your nice
    clean lunar image!

lupton@uhccux.uhcc.hawaii.edu (Robert Lupton) (05/21/89)

The problem of de-blurring images is pretty standard, and pretty hard. The
naive solution (for a constant PSF) of deconvolving by dividing in the
Fourier domain usually fails horribly. The problem is that the FT of the
image usually disappears into noise at high frequencies, and the division
amplifies that noise. If you want to do better you have to use some
constraints (such as: the object is positive everywhere, or bounded by a
circle). Various techniques are around, such as Jansson's and Maximum
Entropy. The rule of thumb in astronomy is that you can gain about a
factor of 2 in resolution.
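As a concrete example of a constrained scheme, here is a rough numpy sketch
of the Richardson-Lucy iteration, whose multiplicative update keeps the
estimate positive everywhere (circular boundaries assumed, and no explicit
noise model -- this is an illustration, not any of the codes named above):

```python
import numpy as np

def _cconv(a, b_fft):
    # Circular convolution, with the kernel's FFT precomputed.
    return np.real(np.fft.ifft2(np.fft.fft2(a) * b_fft))

def richardson_lucy(blurred, psf, n_iter=100):
    """Iterative deconvolution; starting from a positive guess, the
    multiplicative update can never drive the estimate negative."""
    h = np.zeros_like(blurred, dtype=float)
    h[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()
    h = np.roll(h, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    h_fft = np.fft.fft2(h)
    # For a real kernel, the adjoint (flipped) kernel's FFT is the conjugate.
    h_adj_fft = np.conj(h_fft)

    est = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(n_iter):
        ratio = blurred / np.maximum(_cconv(est, h_fft), 1e-12)
        est = est * _cconv(ratio, h_adj_fft)
    return est
```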

			Robert

james@rover.bsd.uchicago.edu (05/22/89)

In article <3985@uhccux.uhcc.hawaii.edu>, lupton@uhccux.uhcc.hawaii.edu (Robert Lupton) writes...
> 
>The problem of de-blurring images is pretty standard, and pretty hard. The
>naive solution (for a constant PSF) of deconvolving by dividing in the
>Fourier domain usually fails horribly.

Agreed. One other trick is a method called "iterative deconvolution". If
your image contains an object whose shape, as projected, is approximately
known, you can make an "estimate" of the actual PSF (point-spread
function), convolve it with the shape, and compare the result to the
image.  It is best to vary as few parameters as possible, and to assume a
general shape for the PSF (e.g. a Gaussian).  There are various articles
in MEDICAL PHYSICS and RADIOLOGY on this technique as applied to the
blurring function of radiographic imaging systems.
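A toy numpy version of that idea, varying just one parameter -- the width of
an assumed Gaussian PSF -- and keeping whichever blur of the known shape best
matches the observed image (function names here are illustrative, not from
the articles):

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Circular Gaussian blur applied in the frequency domain;
    # sigma is the PSF's standard deviation in pixels.
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    H = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def fit_psf_width(observed, model, sigmas):
    """Assume a Gaussian PSF; pick the width whose blur of the known
    model shape best matches the observed image (least squares)."""
    errors = [np.sum((gaussian_blur(model, s) - observed) ** 2)
              for s in sigmas]
    return sigmas[int(np.argmin(errors))]
```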

James Balter
james@rover.uchicago.edu
"If the hat fits, slice it!"

cme@cloud9.Stratus.COM (Carl Ellison) (05/23/89)

"Digital image deblurring by nonlinear homomorphic filtering"

Thomas Michael Cannon

August 1974
UTEC-CSc-74-091
Computer Science Dept.
University of Utah
Salt Lake City, Utah  84112

In his example photos, there's considerable ringing around restored features
in the moderately blurred shots (although all the blur is gone) and the
severely blurred shot (of road signs) remains unreadable.


--Carl Ellison                      UUCP::  cme@cloud9.Stratus.COM
SNail:: Stratus Computer; 55 Fairbanks Blvd.; Marlborough MA 01752
Disclaimer::  (of course)

rogerh@arizona.edu (Roger Hayes) (05/25/89)

Stuart Geman and Donald Geman,
"Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration
of Images",
IEEE Trans. on Pattern Analysis & Machine Intelligence,
PAMI-6(6) (Nov 1984): 721-741.

Consider the degraded image as the result of a stochastic process
(blurring, noise).  Use a local neighborhood process to find the
most likely initial image, given the degraded image and some assumptions
about the character of the initial image.  With examples.