[comp.graphics] Fractal Compression

wasg@rice.edu (Eric Salituro) (05/11/88)

I fail to see what the hoopla over fractal-based image compression is all
about. In the last two weeks, I've read a couple of breathless articles
proclaiming 1000 to 1 reduction but all I've seen as evidence are a couple
of low-res pictures. 
I don't know what the originals looked like, but I think we could be wowed
more effectively after seeing a comparison. I think there is a forest image
running around that doesn't look as good as Reeves' particle-based forest
from "Andre and Wally B."
Both articles claim that after the compression process "the original can be
thrown away." Isn't that a little hyperbolic? From what I understand 
(admittedly from an oversimplified description of the algorithm) the picture
is made by assembling fractal copies of the original image components. To me,
that is like copying a cake with only the original recipe. It could be a
close approximation, but not the same as the original.
Please straighten me out; it seems a little soon to call this revolutionary.

Eric Salituro
pion, Rice University

pete@octopus.UUCP (Pete Holzmann) (05/17/88)

In article <686@thalia.rice.edu> wasg@rice.edu (Eric Salituro) writes:
>I fail to see what the hoopla over fractal-based image compression is all
>about. In the last two weeks, I've read a couple of breathless articles
>proclaiming 1000 to 1 reduction but all I've seen as evidence are a couple
>of low-res pictures. 
>I don't know what the originals looked like, but I think we could be wowed
>more effectively after seeing a comparison. I think there is a forest image
>running around that doesn't look as good as Reeves' particle-based forest
>from "Andre and Wally B."
>Both articles claim that after the compression process "the original can be
>thrown away." Isn't that a little hyperbolic? From what I understand 

The article in Computer Graphics Today (April '88) is good reading. Salient
points gleaned from it:

- High compression ratios come from using very high resolution pictures,
	presumably with little information content in the extra resolution.
	At 1Kx1Kx24bits, they claim frequent reductions of 3000:1 (although
	their description seems to assume that the color table is not
	contained within the final compressed image). At 2Kx2Kx24bits of the
	same picture, they can supposedly get to 10K:1.

- The compression process is slow: it takes intelligent human interaction
	to find the compressing transformations, on the order of 1000 hours
	per picture. That is no typo!

	They think that a PC-based version will do the compression
	in only 1 hour. Of course, the hardware involved will cost $500,000.
	(Again, not a typo). This is a 'PC'? Would that I had a few!!!

- Decompression takes 30 minutes now on a fast box. They are hoping to
	eventually hit 20 seconds (on future parallel processor technology),
	and someday in the distant future to get 30 fps.

- The example shown DOES look rather hokey. It is pretty good for 'computer
	generated art', but nothing like real life! I can't believe they
	think this is anywhere NEAR as good as the original image. YUCK!
	Since when have you seen a mountainscape where there are only 2-3
	kinds of tree shapes, and most branches look identical, and there
	are maybe 6 shades of green in the whole forest, and......

Let's see.... take a 2K by 2K by 24 bit image. That's 12 MB. Reduce it to
	400 x 256 by 6 bits, with a 64 x 24 LUT (that's 75K now). Use
	RLE (run-length) encoding for initial compression (oh... 30K I guess). Run
	it through LZW compression to get it down to 24K. I claim my method
	goes from 12MB to 24K! Amazing. 500:1 compression, *FAST* 
	reconstruction (just dither to get more resolution, it's easy!),
	and you can throw away the original! Can I have a big government
	grant too??? (:-) :-) :-) I'm sorry, I just couldn't resist!)
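
	(If anyone wants to check my arithmetic, here it is in C -- same
	guesses as above, nothing more scientific than that:)

#include <stdio.h>

int main(void)
{
    long original   = 2048L * 2048L * 24L / 8L;  /* 2K x 2K x 24 bits, in bytes */
    long downsample = 400L * 256L * 6L / 8L;     /* 400 x 256 x 6 bits */
    long lut        = 64L * 24L / 8L;            /* 64-entry, 24-bit LUT */
    long after_lzw  = 24L * 1024L;               /* my guess for the LZW output */

    printf("original:    %ld bytes\n", original);
    printf("downsampled: %ld bytes + %ld byte LUT\n", downsample, lut);
    printf("final:       %ld bytes, ratio about %ld:1\n",
           after_lzw, original / after_lzw);
    return 0;
}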

	I realize this research may be good basic science, and it may lead
	to useful results sometime in the distant future... BUT THESE GUYS
	ARE MILKING IT FOR ALL THE P.R. THEY CAN GET. I am not impressed.
-- 
  OOO   __| ___      Peter Holzmann, Octopus Enterprises
 OOOOOOO___/ _______ USPS: 19611 La Mar Court, Cupertino, CA 95014
  OOOOO \___/        UUCP: {hpda,pyramid}!octopus!pete
___| \_____          Phone: 408/996-7746

doug-merritt@cup.portal.com (05/18/88)

Eric Salituro writes (wrt IFS image compression):
>I fail to see what the hoopla over fractal-based image compression is all
>about. In the last two weeks, I've read a couple of breathless articles
>proclaiming 1000 to 1 reduction but all I've seen as evidence are a couple
>of low-res pictures.  [...]
>It could be a close approximation, but not the same as the original.
>Please straighten me out, it seems a little soon to call this revolutionary.

Compression ratios of *10,000* to 1 aren't all that unusual. The "close
approximation" can be made as close as is desired (e.g. exactly the same
as the original image, within the parameters of your display device). This
latter fact has in fact been proven mathematically (see "the Collage
Theorem"), which is why people are somewhat excited about it. If some
imprecision is tolerable then you can speed up the decoding somewhat.
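
To give a feel for the decoding half (the easy half), here is a toy
random-iteration ("chaos game") decoder in C. The map table is the published
Barnsley fern -- four affine maps plus probabilities, 28 numbers in all --
and the crude character-cell "display" is my own hack; a real decoder renders
into a frame buffer and carries color as well. Finding the maps for an
arbitrary image is the hard part (that is what the Collage Theorem is about)
and is not shown:

#include <stdio.h>
#include <stdlib.h>

/* One affine map: x' = a*x + b*y + e,  y' = c*x + d*y + f, picked with prob. p */
struct map { double a, b, c, d, e, f, p; };

/* Barnsley's fern: the entire "image" is these 28 numbers */
static struct map fern[4] = {
    {  0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01 },
    {  0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85 },
    {  0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07 },
    { -0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07 }
};

#define W 60
#define H 40
static char screen[H][W];

int main(void)
{
    double x = 0.0, y = 0.0;
    long i;
    int k, row, col;

    for (i = 0; i < 100000L; i++) {
        double r = (double)rand() / RAND_MAX, sum = 0.0, nx;

        for (k = 0; k < 3; k++) {                  /* choose a map by probability */
            sum += fern[k].p;
            if (r < sum)
                break;
        }
        nx = fern[k].a * x + fern[k].b * y + fern[k].e;
        y  = fern[k].c * x + fern[k].d * y + fern[k].f;
        x  = nx;

        col = (int)((x + 3.0) / 6.0 * (W - 1));    /* crude viewport: x in [-3,3] */
        row = (int)((10.0 - y) / 10.0 * (H - 1));  /* y in [0,10], tip of fern up */
        if (row >= 0 && row < H && col >= 0 && col < W)
            screen[row][col] = '*';
    }

    for (row = 0; row < H; row++) {
        for (col = 0; col < W; col++)
            putchar(screen[row][col] ? '*' : ' ');
        putchar('\n');
    }
    return 0;
}

Everything the decoder needs is in the little table at the top; that is the
sense in which "the original can be thrown away" -- for pictures that really
are (close to) attractors of a small set of maps.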

I'm not clear on the question of the tradeoff between compression ratio
and error epsilon, but it certainly isn't as large a factor as one would
initially think.

I don't think I called it revolutionary before, but that's not a bad
word for it. Although it's pretty slow any way you look at it. But
more cpu horsepower is right around the corner. And direct IFS hardware
support is being implemented. We're going to be hearing more and more
about this technique in the coming years. Put money on it.

Be cynical if you like, but don't expect to see a proof in this newsgroup.
Go read the article in Byte if you're really interested. I had to add
a fair amount of thinking and doodling to the article, but eventually
I saw why it worked so well.
	Doug
---
      Doug Merritt        ucbvax!sun.com!cup.portal.com!doug-merritt
                      or  ucbvax!eris!doug (doug@eris.berkeley.edu)
                      or  ucbvax!unisoft!certes!doug

b39756@tansei.cc.u-tokyo.JUNET (Martin J. Duerst) (05/19/88)

In article <5546@cup.portal.com> doug-merritt@cup.portal.com writes:
>Eric Salituro writes (wrt IFS image compression):
>>I fail to see what the hoopla over fractal-based image compression is all
>>about. In the last two weeks, I've read a couple of breathless articles
>>proclaiming 1000 to 1 reduction but all I've seen as evidence are a couple
>>of low-res pictures.  [...]
>
>Compression ratios of *10,000* to 1 aren't all that unusual. The "close
>approximation" can be made as close as is desired (e.g. exactly the same
>as the original image, within the parameters of your display device). This
>latter fact has in fact been proven mathematically (see "the Collage
>Theorem"), which is why people are somewhat excited about it. If some
>imprecision is tolerable then you can speed up the decoding somewhat.
    There are two aspects of exactness involved here. The first is how
exactly the decoded image conforms to the abstract picture described by
the IFS coefficients. This depends on the time you use for decoding
(number of random iterations). The second (and bigger) problem is how
exactly the IFS coefficients describe the original picture. As a hard fact,
with, e.g., 2000 bytes you can't encode more than 2**16000 different pictures,
which is a lot, but may not be enough. If you want exact encoding, you can't
have short codes for all pictures, only for a few of them.
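    To put rough numbers on it (my arithmetic, using the resolutions quoted
earlier in this thread): a 1K x 1K x 24 bit picture is one of roughly
2**25000000 possible pictures, while a 1 KB code (about 3000:1) can distinguish
at most 2**8192 of them. By pigeonhole, exact coding at that ratio is possible
for only a vanishingly small subset of pictures; for all the rest, the IFS code
is necessarily an approximation.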
>I'm not clear on the question of the tradeoff between compression ratio
>and error epsilon, but it certainly isn't as large a factor as one would
>initially think.
    Seems to depend on the original picture. For idealized ferns and forests
without the natural irregularities, it applies.
>I don't think I called it revolutionary before, but that's not a bad
>word for it. Although it's pretty slow any way you look at it. But
>more cpu horsepower is right around the corner. And direct IFS hardware
>support is being implemented. We're going to be hearing more and more
>about this technique in the coming years. Put money on it.
    What matters here isn't the (granted) improvement of a single
technology (here: CPU speed), but the relations. The main focus of
IFS is animation and transmission over networks. For animation playback,
the volume and cost of mass storage will improve greatly, too (optical R/W
disks), so it may be much cheaper to use that storage than special IFS hardware.
The same applies to transmission over networks, where capacity and price are
improving, too. For one-to-many links (TV, etc.), full bandwidth transmission
is used already, and for one-to-one links (picture telephone, etc.),
the ratio of encoding to decoding time (and the large libraries needed
for encoding) is a big problem.
    Another tradeoff aspect is the comparison with other encoding algorithms.
Many methods are available that provide reasonable compression ratios,
are much faster, esp. for encoding, and at least as easily implementable
in hardware. They probably fit the CPU/storage cost relations much better.
    Fractal encoding is an interesting research subject, but economic
and engineering tradeoffs don't help it. Better put your money on 
something else. It could die silently.

Martin J. Duerst
Dept. of Information Science,
Faculty of Science
University of Tokyo
7-3-1 Hongo, Bunkyo-ku
113 Tokyo, Japan

ahg@k.cc.purdue.edu (Allen Braunsdorf) (05/19/88)

In article <5546@cup.portal.com> doug-merritt@cup.portal.com writes:
>
>Go read the article in Byte if you're really interested. I had to add
>a fair amount of thinking and doodling to the article, but eventually
>I saw why it worked so well.
>	Doug

If you have an Amiga and want to doodle in high style, check out the Amiga port
of my IFS decoder.  It should be in the comp.*.amiga groups soon.

It decodes at ~20000 pixels/s, so it's much faster than doing it by hand :-).

(I have a UNIX(tm) plot(3) version too, but it doesn't have all the features.)

Allen Braunsdorf			WORK	k.cc.purdue.edu!ahg
General Consultant			SCHOOL	ei.ecn.purdue.edu!braunsdo
Purdue University Computing Center	HOME	ee.ecn.purdue.edu!gawk!akb

flaig@cit-vlsi.Caltech.Edu (Charles M. Flaig) (05/20/88)

In article <1947@tansei.cc.u-tokyo.JUNET> b39756@tansei.cc.u-tokyo.JUNET (Martin J. Duerst) writes:
>    Another tradeoff aspect is the comparision with other encoding algorithms.
>Many methods are available that provide reasonalble compression ratios,
>are much faster, esp. for encoding, and at least as easily implementable
>in hardware. They probably fit the CPU/storage cost relations much better.
>    Fractal encoding is an interesting research subject, but economic
>and engineering tradeoffs don't help it. Better put your money on 
>something else. It could die silently.

There are always a few applications for which almost any improvement in 
compression ratio is worth the CPU time required to get it.  A major one
that comes to mind is the transmission of pictures from deep space probes.
Sending back 2K rather than 10K requires 1/5 as much energy, and they can
afford to take their time processing the image after it is stored in the
probe's memory before transmitting it.

Whether or not such applications are a driving force in the development of
IFS technology depends a lot on the funding available.  NASA's funding isn't
too impressive of late, but if the military had a good use for such a high
compression ratio (sending images out of hostile territory?--less data means
less chance of being detected) then you can be sure of *lots* of development!

______________________________________________________________________________
  ___   ,               ,                                           ,;,;;;,
 /   Y /|              /|              Charles Flaig                ;/@-@\;
|      |/     __, ,__  |/              flaig@csvax.caltech.edu       | ^ |
|      /^\   /  | | |  /  /\  /\                                      \=/ 
 \____/|  \_/|_/\_/ \_/ \_\/_/_/_/     "What, you think they PAY me for this?"

jru@etn-rad.UUCP (John Unekis) (05/21/88)

In article <1947@tansei.cc.u-tokyo.JUNET> b39756@tansei.cc.u-tokyo.JUNET (Martin J. Duerst) writes:
>In article <5546@cup.portal.com> doug-merritt@cup.portal.com writes:
>>Eric Salituro writes (wrt IFS image compression):
>>>I fail to see what the hoopla over fractal-based image compression is all
>>>about. In the last two weeks, I've read a couple of breathless articles
>>>proclaiming 1000 to 1 reduction but all I've seen as evidence are a couple
>>>of low-res pictures.  [...]
>>Compression ratios of *10,000* to 1 aren't all that unusual. The "close
.....
>    There are two aspects of exactness involved here. The first is how
>exactly the decoded image conforms to the abstract picture described by.....

I think I must be missing something here. I understand that it is possible to 
come up with representations for graphically generated images that are very 
compact. What I get frustrated with is the imprecision of terminology used 
when talking about image compression. The filled and shaded polygon images
used in cartoons and advertisements are easy to parameterize and can be stored
in forms that require very little data to be recorded. SO WHAT?

Suppose I give you an image from the real world. An example might be a 
digitized X-ray image. An exact representation of this image is absolutely 
critical since such problems as lumps of cancerous cells may initially show up
as one or two dark pixels on the slide. No compression algorithm which might
either remove such pixels or allow them to be created by error is acceptable.
It is simply not good enough if the compression/decompression merely preserves
major details in recognizable form. Most hospitals will not allow compression
and decompression of such images unless it can be demonstrated that the 
reconstructed image when subtracted from the original yields ABSOLUTELY ZERO
different pixels. 
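
(The acceptance test itself is trivial, by the way. For raw 8-bit image files
it is nothing more than the following -- the raw-file format is just an
assumption for illustration. The hard part is finding an algorithm that both
passes this test and compresses by much:)

#include <stdio.h>

/* Compare two raw 8-bit image files byte for byte; any difference fails. */
int main(int argc, char **argv)
{
    FILE *orig, *recon;
    int a, b;
    long pos = 0, bad = 0;

    if (argc != 3) {
        fprintf(stderr, "usage: %s original reconstructed\n", argv[0]);
        return 1;
    }
    orig  = fopen(argv[1], "rb");
    recon = fopen(argv[2], "rb");
    if (orig == NULL || recon == NULL) {
        perror("fopen");
        return 1;
    }
    for (;;) {
        a = getc(orig);
        b = getc(recon);
        if (a == EOF && b == EOF)
            break;
        if (a != b)
            bad++;
        pos++;
    }
    printf("%ld bytes compared, %ld differ -- %s\n",
           pos, bad, bad == 0 ? "lossless" : "NOT acceptable");
    return bad != 0;
}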

After years of looking at image compression I have not seen a compression 
technique which will exceed 10 to 1 on the average image (not one specially
selected to compress well) and still be completely non-destructive. My
impression is that these esoteric techniques like fractal compression only
produce these incredible compression ratios if you use images with large
areas with little detail in them, and you aren't terribly picky about 
accuracy in the reconstructed image. These kinds of techniques may be great
for sending children's cartoons over telephone lines, but what use are they 
in real image processing? Am I wrong? Can somebody refer me to literature
that gives a compression technique for real images that will produce better
than 10 to 1 with not even 1 pixel difference in the reconstructed image?
I seriously doubt it.

				voder!wlbr!etn-rad!jru

hutch@net1.ucsd.edu (Jim Hutchison) (05/23/88)

So what we have here is a library of shapes to replace the squares given
out for quadtree methods, and a replacement for the quadtree which is
a fractal tree?  This is a first impression.  It would seem that it might
also be possible to design a similar compression technique using L-systems.

Then you could replace the shape library with a riff library, and play it! :-)

Seriously, what advantages does the "fractal" system have over L-systems in
this case?  L-systems failed to adequately represent natural models, but this
is not a natural model.  This is a jumble of polygons which represent pools
of color (which could be extended to overlap).
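
(For concreteness: by an L-system I just mean repeated string rewriting. A toy
example in C, using the classic Koch-style rule F -> F+F-F-F+F -- purely my own
illustration, not anyone's compression scheme:)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rewrite every 'F' as "F+F-F-F+F"; all other symbols copy through.
 * The grown string would then be interpreted as turtle commands. */
static char *rewrite(const char *s)
{
    size_t i, n = strlen(s);
    char *out = malloc(9 * n + 1);   /* worst case: every symbol expands 9x */
    char *p = out;

    for (i = 0; i < n; i++) {
        if (s[i] == 'F') {
            strcpy(p, "F+F-F-F+F");
            p += 9;
        } else {
            *p++ = s[i];
        }
    }
    *p = '\0';
    return out;
}

int main(void)
{
    char *s = rewrite("F");          /* axiom "F", generation 1 */
    int gen;

    for (gen = 2; gen <= 3; gen++) {
        char *t = rewrite(s);
        free(s);
        s = t;
    }
    printf("generation 3, %lu symbols:\n%s\n", (unsigned long)strlen(s), s);
    free(s);
    return 0;
}

The rules plus the axiom are the whole "code" for the curve, which is the same
flavor of compression as the IFS map table. The open question (for me) is
whether such rules can be found automatically for a given image, which seems
to be what the Collage Theorem helps with on the IFS side.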

Any thoughts?

    Jim Hutchison   		UUCP:	{dcdwest,ucbvax}!cs!net1!hutch
		    		ARPA:	Hutch@net1.ucsd.edu
Disclaimer:  The cat agreed that it would be o.k. to say these things.

hutch@net1.ucsd.edu (Jim Hutchison) (05/23/88)

In article <6603@cit-vax.Caltech.Edu> flaig@cit-vlsi.UUCP (Charles M. Flaig) writes:
>There are always a few applications for which almost any improvement in 
>compression ratio is worth the CPU time required to get it.  A major one
>that comes to mind is the transmission of pictures from deep space probes.
>Sending back 2K rather than 10K requires 1/5 as much energy, and they can
>afford to take their time processing the image after it is stored in the
>probes memory before transmitting it.

Deep space is a tough place.  The hardware has to be able to take abuse
which is still not well understood.  I have read several exciting articles
where they discuss hot-patching a space probe to get around some broken
hardware that got slammed by space.  Those must also be very fun to find.
Repeated-symbol compression (cf. Lempel-Ziv) will work handily for the
digital data as well as the 2d data.  Strangely enough, I was also not under
the impression that probes used a frame store, or that they could afford
the mass.  Such a problem could make this a moot point.

    Jim Hutchison   		UUCP:	{dcdwest,ucbvax}!cs!net1!hutch
		    		ARPA:	Hutch@net1.ucsd.edu
Disclaimer:  The cat agreed that it would be o.k. to say these things.

b39756@tansei.cc.u-tokyo.JUNET (Martin J. Duerst) (05/23/88)

In article <522@etn-rad.UUCP> jru@etn-rad.UUCP (John Unekis) writes:
>....
>I think I must be missing something here. I understand that it is possible to 
>come up with representations for graphically generated images that are very 
>compact. What I get frustrated with is the imprecision of terminology used 
>when talking about image compression. The filled and shaded polygon images
>used in cartoons and advertisements are easy to parameterize and can be stored
>in forms that require very little data to be recorded. SO WHAT?
>
There are basically two forms of image compression, approximative and
error free (exact). See, e.g., A. Rosenfeld and A.C. Kak, Digital Picture
Processing, Second Edition, Volume 1, Chapter 5 (Academic Press, New York,
1982). When people speak about exactness, etc., it is clear that they mean
approximate compression, so nobody mentions this explicitly.
The reason that approximate picture compression is so widely used (in
contrast to text, where there is no such thing as approximate
compression) is that the digital picture (X*Y pixels of Z bits depth) is
already an approximation of an image that is, unless we descend to the
level of photons, analog. How much 'compression' happens during
digitization and how much during actual compression is not so important,
as long as the final (decoded) image is good enough for the specific
application.
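    (To make the distinction concrete: error-free compression means
decode(encode(x)) == x bit for bit, as in run-length or Lempel-Ziv coding.
A toy run-length coder in C -- my own illustration, not something from the
book cited above:)

#include <stdio.h>

/* Toy error-free run-length coder for 8-bit pixels: each run of equal bytes
 * becomes a (count, value) pair, count 1..255.  Decoding reproduces the
 * input exactly; the ratio, however, depends entirely on the picture. */
static void rle_encode(FILE *in, FILE *out)
{
    int cur, prev, count;

    prev = getc(in);
    if (prev == EOF)
        return;
    count = 1;
    while ((cur = getc(in)) != EOF) {
        if (cur == prev && count < 255) {
            count++;
        } else {
            putc(count, out);
            putc(prev, out);
            prev = cur;
            count = 1;
        }
    }
    putc(count, out);
    putc(prev, out);
}

static void rle_decode(FILE *in, FILE *out)
{
    int count, value;

    while ((count = getc(in)) != EOF) {
        value = getc(in);
        if (value == EOF)
            break;
        while (count-- > 0)
            putc(value, out);
    }
}

int main(int argc, char **argv)
{
    if (argc == 2 && argv[1][0] == 'c')
        rle_encode(stdin, stdout);
    else if (argc == 2 && argv[1][0] == 'd')
        rle_decode(stdin, stdout);
    else
        fprintf(stderr, "usage: rle c|d <infile >outfile\n");
    return 0;
}

    Its worst case (no runs at all) actually doubles the file, which is one
reason truly exact methods on real-world images rarely do much better than
the 3 or 4 to 1 I mention further down.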
>Suppose I give you an image from the real world. An example might be a 
>digitized X-ray image. An exact representation of this image is absolutely 
                           ^^^^^^^^^^^^^^^^^^^^
    An exact representation of the original image is impossible; digitization
always includes approximations. Instead of searching for error-free compression
algorithms, why not digitize at twice the resolution and use approximate
compression? This might give better compression factors with the same
ability to detect cancer (or better cancer detection with the same compression
rate).
>critical since such problems as lumps of cancerous cells may initially show up
>as one or two dark pixels on the slide. No compression algorithm which might
>either remove such pixels or allow them to be created by error is acceptable.
There are many other components in your imaging system that can introduce
errors and inaccuracies, so absolute precision in one step might only hide
the fact that you are dealing with basically imprecise information.
A cancerous cell could initially be smaller than a pixel and not visible,
but well worth seeing to further the progress of medicine.
>It is simply not good enough if the compression/decompression merely preserves
>major details in recognizable form. Most hospitals will not allow compression
>and decompression of such images unless it can be demonstrated that the 
>reconstructed image when subtracted from the original yields ABSOLUTELY ZERO
>different pixels. 
That is a legal aspect. However, an overall approach considering the
whole imaging process might lead to better results, if only the
regulations can be changed.
>After years of looking at image compression I have not seen a compression 
>technique which will exceed 10 to 1 on the average image (not one specially
>selected to compress well) and still be completely non-destructive.
>.....
>				voder!wlbr!etn-rad!jru
Even 10 to 1 is very good; 3 to 1 or 4 to 1 is more usual. If you have
a good reference on an exact compression algorithm that achieves 10 to 1,
please mail me or post it.

Martin J. Duerst
Dept. of Inf. Sc., Faculty of Sc.
University of Tokyo
7-3-1 Hongo, Bunkyo-ku
113 Tokyo, Japan

jru@etn-rad.UUCP (John Unekis) (05/26/88)

In article <1976@tansei.cc.u-tokyo.JUNET> b39756@tansei.cc.u-tokyo.JUNET (Martin J. Duerst) writes:
>In article <522@etn-rad.UUCP> jru@etn-rad.UUCP (John Unekis) writes:
>>....
>>Suppose I give you an image from the real world. An example might be a 
>>digitized X-ray image. An exact representation of this image is absolutely 
>                           ^^^^^^^^^^^^^^^^^^^^
>    An exact representation of the original image is impossible; digitization
>always includes approximations. Instead of searching for error-free compression
>algorithms, why not digitize at twice the resolution and use approximate
>compression?
..... I have been working in commercial applications for image processing for
over five years and I am painfully aware that all images are only approximate
representations of reality. What am I supposed to do, tell the hundreds of 
hospitals out there that have invested literally billions of dollars in medical
imaging equipment that they have to junk it all so that they can use my nifty
new image compression technique? Should I tell NASA that if they would only 
send up a whole new generation of satellites then my new compression technique 
would work out real well? Besides, if I digitize at twice the resolution I
am starting with 4 times the data, and I have to achieve 12 to 1 compression
to get to the same storage and I/O bandwidth requirements as a 3 to 1 
compression on a lower-resolution image.
     The key phrase above was *real world*, that is, environments where things
like resolution, storage size, and I/O throughput are determined by existing
equipment.

>Even 10 to 1 is very good; 3 to 1 or 4 to 1 is more usual. If you have
>a good reference on an exact compression algorithm that achieves 10 to 1,
>please mail me or post it.
>
 Sorry, the 10-to-1 compression was quoted by a San Francisco area start-up
 medical equipment manufacturer. The algorithm was proprietary, and they have
 since gone out of business. I saw some of the images at a trade show, and 
 although they looked good, the claim of truly non-destructive compression
 may not have been completely true.

hultquis@pioneer.arpa (Jeff P.M. Hultquist) (05/27/88)

>>In article <1976@tansei.cc.u-tokyo.JUNET> b39756@tansei.cc.u-tokyo.JUNET (Martin J. Duerst) writes:
>>
>>    An exact representation of the original image is impossible; digitization
>>always includes approximations. Instead of searching for error-free compression
>>algorithms, why not digitize at twice the resolution and use approximate
>>compression?

First, because if I pay a lot of time and money to digitize four (or
eight) times as many bits, I am *not* about to throw those bits away
just so I can use a "cute" compression algorithm.

Second, with exact compression and original data, I know how accurate my
image is.  With some adaptive compression, I am no longer able to
trust anything I see.  "Is that thing *data*, or did the compression
algorithm produce it?"

Jeff Hultquist		hultquis@{ pioneer.arc.nasa.gov	}
(415) 694-4970		         { ames-pioneer.arpa    }

K.N.R.Conner@newcastle.ac.uk (Kevin Conner) (11/19/90)

Hi.

I'm looking for some references on picture compression techniques
using fractals.  I would also like to hear from anyone who has
implemented any of the algorithms, especially for parallel,
shared memory machines.

I'll summarise any responses I get.

Thanks
	Kev