[net.graphics] FFT of image in sections..

nitin@ur-laser.uucp (Nitin Sampat) (08/19/85)

From what I have been reading regarding this article since I posted it, it
seems to me that I may not have communicated my question clearly enough.
Carl Lowenstein and Shep Siegel mention that taking an FFT does involve
doing it in "sections", or representing a two-dimensional string of
numbers as a one-dimensional string, etc.  All that is fine, but my question
did not refer to the FFT algorithm itself, or to how it manifests itself
in "sections" (although that would be an interesting discussion in itself).

I am saying that once we have a certain FFT algorithm that we start using,
we may be limited by core memory, depending on the computer we are
using.  If we have a megabyte image and we try an FFT on it, most computers
will start paging, copying to and from disk, since they can't fit the entire
image in core at one time.  However, they can process a smaller image almost
instantaneously (that is, an image small enough to get complete processing in
core without any significant paging).  Once we determine what the physical
limit of our computer is, we will know the largest image we can process FAST.

Given this information, my question is: can we now process the megabyte image
in such sections and get any increase in speed?  And does linear systems
theory allow such a process, given that an FFT of a part is NOT the FFT of
the whole image?

I don't think we can take a 64 x 64 image (say), break it up
into 16 x 16 sub-images, FFT them, and put these FFT'd sub-images together
and call that the FFT of the original 64 x 64 image.  Or can we?
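[Editorial note: a small sketch, not from the original post, illustrating nitin's point in one dimension with a naive DFT written in pure Python.  Transforming two halves of a signal separately does not give the transform of the whole signal.]

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform of a 1-D sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

signal = [1.0, 2.0, 3.0, 4.0]
whole = dft(signal)                           # DFT of the full signal
halves = dft(signal[:2]) + dft(signal[2:])    # DFTs of the two halves, concatenated

# The two results disagree: the DFT of the parts is not the DFT of the whole.
differs = any(abs(a - b) > 1e-9 for a, b in zip(whole, halves))
```

The same holds in two dimensions: the 16 x 16 sub-image transforms cannot simply be tiled back together.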

				nitin
				{allegra,seismo}!rochester!ur-laser.uucp

bob@islenet.UUCP (Bob Cunningham) (08/26/85)

There are two approaches I know of to efficiently get the FFT of a large
2D image.

The simplest approach, assuming you already have in hand an efficient 1D
FFT routine, is to take the FFTs of the original rows (or columns,
depending upon how you happen to be storing the 2D image), transpose the
resulting matrix using a fast transpose algorithm, then FFT those rows.

Depending upon what you want to do with the transformed data, it probably
won't be necessary to transpose that result (except in the reverse
process of doing the inverse FFT).
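[Editorial note: a minimal sketch of the row-column method Bob describes, using a textbook radix-2 Cooley-Tukey FFT in pure Python.  The output is left transposed, matching his remark that the final transpose is often unnecessary.]

```python
import cmath

def fft1d(x):
    """Radix-2 Cooley-Tukey FFT; length must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft1d(x[0::2])
    odd = fft1d(x[1::2])
    t = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + t[k] for k in range(n // 2)] +
            [even[k] - t[k] for k in range(n // 2)])

def fft2d(image):
    """Row-column 2-D FFT: FFT each row, transpose, FFT the new rows."""
    rows = [fft1d(row) for row in image]
    cols = list(map(list, zip(*rows)))   # stand-in for a fast transpose
    return [fft1d(col) for col in cols]  # note: result is left transposed

image = [[1.0, 2.0], [3.0, 4.0]]
spectrum = fft2d(image)
```

Only one row (or column) needs to be in core at a time, which is the point for a large image.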

The second approach, more elegant, would be to develop a specialized 2D FFT
algorithm that decimates the 2D array directly, binding both 1D transforms
together (essentially butterflying partitions of the original 2D array, where
the sizes of the partitioned sub-arrays coincide with the amount of real
memory you can dedicate to the task).
-- 
Bob Cunningham  {dual|vortex|ihnp4}!islenet!bob
Hawaii Institute of Geophysics

shep@datacube.UUCP (08/26/85)

>/**** datacube:net.graphics / ur-laser!nitin / 11:18 am  Aug 19, 1985 ****/
>Given this information, my question is, can we now process the megabyte image
>in such sections and get any increase in speed. Also, does linear system
>theory allow such a process.. because an FFT of a part is NOT the FFT of the
>whole image. 
>/* ---------- */

You -can- use linear systems theory... The DFT is separable! So you
break the n*m image into m rows and n columns. You may then process
the more manageable 1-dimensional arrays in core. I grant you that
separability speed-ups may be easier to take advantage of with
pipelined, not-in-place FFT butterflies; but it does reduce the amount
of memory required in core -and- reduce the computron requirement.
Look at "Digital Signal Processing", Oppenheim/Schafer page 320.
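[Editorial note: a sketch, not from the original post, verifying the separability Shep invokes.  Computing the 2-D DFT one row and one column at a time gives the same result as the direct double-sum definition, while only ever holding a single 1-D array's transform in "core".]

```python
import cmath

def dft(x):
    """Naive O(n^2) 1-D DFT."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def dft2_direct(img):
    """Direct 2-D DFT from the double-sum definition."""
    n, m = len(img), len(img[0])
    return [[sum(img[r][c]
                 * cmath.exp(-2j * cmath.pi * (u * r / n + v * c / m))
                 for r in range(n) for c in range(m))
             for v in range(m)]
            for u in range(n)]

def dft2_separable(img):
    """Same transform via separability: 1-D DFTs of rows, then of columns."""
    n, m = len(img), len(img[0])
    rows = [dft(row) for row in img]
    cols = [dft([rows[r][c] for r in range(n)]) for c in range(m)]
    return [[cols[c][r] for c in range(m)] for r in range(n)]

img = [[1.0, 2.0], [3.0, 4.0]]
a, b = dft2_direct(img), dft2_separable(img)
agree = all(abs(a[i][j] - b[i][j]) < 1e-9 for i in range(2) for j in range(2))
```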

I know of no way to split the image up into 2-d sections and recombine
them that will yield a computron reduction.

Shep Siegel                           UUCP: ihnp4!datacube!shep
Datacube Inc.; 4 Dearborn Rd.; Peabody, Ma. 01960; 617-535-6644