[alt.fractals] How well does Barnsley's system perform?

cs001005@cslab9a.cs.brown.edu (Thomas Colthurst) (04/04/90)

     In the January 1988 issue of BYTE, Barnsley and Sloan ("A Better
Way to Compress Images", p. 215-223) claim 10,000 to 1 image compression
ratios.  Specifically, they claim that high-detail gray-scale aerial
photographs occupying 130 megabytes can be compressed down to 13,000 bytes.
They also show pictures of the Black Forest, a Bolivian girl, and the
Monterey coast that are encoded to 2000, 2000, and 100 bytes,
respectively, and were "based on photographs in recent issues of
National Geographic."  At SIGGRAPH '87, they showed a full sequence
video animation, A Cloud Study, which was "encoded at a ratio exceeding
1,000,000 to 1." A Cloud Study actually wasn't a pure IFS; it used IFS's
with time-varying parameters as detailed in "Blowing in the Wind:  The
Continuous Dependence of Fractals on Parameters" (Fractals Everywhere,
3.11, pp. 111-117).
     The time required for these compressions is estimated at 100 hours
for complex color images on a Masscomp 5600 workstation (a dual
68020-based system).  Decoding takes 30 minutes.  Barnsley describes in
BYTE a custom hardware device prototype, called IFSIS, but gives no
performance characteristics.  The algorithm they use first breaks up the
image into segments using edge detection, spectrum analysis, color
separation, etc.  They then try to match these segments with a library
of fractals.  I have yet to find a detailed description of how this is
done, but Discover March 1989 ("Fractals in Your Future", pg. 26-27)
shows computer screen photos of the process.  A non-automatic (requires
user to do the collage) program called Collage is described in Fractals
Everywhere (9.8, pg. 377-380).  If anyone can find a more exact
description of the automatic compression algorithm, I would love to hear
about it, as I am currently working on an evolutionary algorithm for IFS
image compression.
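     None of this explains the encoding, but the decoding half at least
is no mystery: you render the attractor of an IFS by random iteration
(the "chaos game"), as Barnsley describes in Fractals Everywhere.  Here
is a minimal sketch of that step in Python -- toy code of my own, with
nothing to do with Barnsley's hardware or his timing figures; the four
affine maps are the familiar fern code from Fractals Everywhere:

import random

# Each map is (a, b, c, d, e, f, p):  (x, y) -> (a*x + b*y + e, c*x + d*y + f),
# chosen with probability p.  These four maps are the entire "compressed image".
FERN = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(maps, iterations=100000, width=60, height=40,
               window=(-2.7, 2.7, 0.0, 10.0)):
    """Render the attractor of `maps` into an ASCII raster over `window`."""
    xmin, xmax, ymin, ymax = window
    grid = [[' '] * width for _ in range(height)]
    x, y = 0.0, 0.0
    for i in range(iterations):
        r = random.random()
        for a, b, c, d, e, f, p in maps:
            r -= p
            if r <= 0.0:
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        if i < 20:              # let the point settle onto the attractor first
            continue
        col = int((x - xmin) / (xmax - xmin) * (width - 1))
        row = int((ymax - y) / (ymax - ymin) * (height - 1))
        if 0 <= col < width and 0 <= row < height:
            grid[row][col] = '*'
    return '\n'.join(''.join(line) for line in grid)

if __name__ == '__main__':
    print(chaos_game(FERN))
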
     A few details on the properties of IFSs and IFS-compressed images:
The size of the IFS increases at most linearly with increasing detail.
This is one of the main results of the Collage Theorem (see "Solution of
an inverse problem for fractals and other sets" by Barnsley, et al., in
Proc. Natl. Acad. Sci. USA, Vol. 83, pp. 1975-77, Apr. 1986).  One of
the nice things about IFSs is that they aren't just for fractals:  IFS
codes for polygons, for instance, are also very easy to construct.  As
for real world images, the examples above sound very impressive, but the
algorithm used for compression has a large influence on the ratio
achieved (i.e., I'm not completely sure that the above images weren't
hand compressed, which would give them an advantage over machine
compressed ratios).  Again, if anyone has details about the performance
of an automatic IFS compression algorithm, I would appreciate hearing
about them.
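     To make the remark above about IFS codes for polygons concrete,
here is one, in the same toy (a, b, c, d, e, f, p) format as the fern
sketch earlier in this post: four maps, each shrinking the plane by 1/2
toward one corner of the unit square, have the filled unit square as
their attractor.

SQUARE = [
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25),   # lower-left quarter
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.0, 0.25),   # lower-right quarter
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.5, 0.25),   # upper-left quarter
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.5, 0.25),   # upper-right quarter
]

# Rendered with the chaos_game() sketch above:
#   print(chaos_game(SQUARE, window=(0.0, 1.0, 0.0, 1.0)))

     Seven numbers per map, four maps: the whole "image" is a few dozen
bytes, which is where the startling ratios for exactly self-similar
scenes come from.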

-Thomas C

patterso@deimos.ADS.COM (Tim J. Patterson) (04/07/90)

In article <1571@dftsrv.gsfc.nasa.gov> seiler@amarna.gsfc.nasa.gov writes:
>In article <3166@usceast.UUCP>, park@usceast.cs.scarolina.edu (Kihong Park) writes...
>>Has anybody seen published work by M. Barnsley(or a public demonstration)
>>corresponding to the performance of his image compression system based on
>>IFSs? Since natural images are in many instances non-fractal, I am particularly
>>interested in how his system fares w.r.t. the degree of faithfulness of the
>>reproduced image vs original image. That is, does his system of IFSs increase
>>drastically in size as more faithfulness is enforced? If there are enough 
>>responses, I will post a summary.
>
>If Barnsley can in fact produce what he claims, he may indeed have something.
>However, his methods seem to be cloaked in a veil of mystery, as he explains
>that he believes the success of his company depends on getting a working system
>on the market at least 6 months ahead of anyone else. When he was here at NASA
>Goddard over a year ago, he invited us to send a "non-expert" to his place,
>where he would demonstrate his system, allowing the visitor to twiddle knobs
>and such. He claimed that he could compress any image and retain full quality,
>__given enough processing time__. It appears that to him the quality is not an
>issue, but the time is. I believe he still has an offer where you can send him
>an image, and he will send back a compressed image and tell you how much it was


I've seen one of the images which was compressed 10,000 to one.  It
took a huge amount of time ( I vaguely recall days) to do the
compression. Overall quality was not very good but what do you expect
at that compression rate?

cs001005@cslab9c.cs.brown.edu (Thomas Colthurst) (04/07/90)

The thing that disturbs me about Barnsley's claims is not that I don't
believe he has something, but that the 'veil of mystery' around his
system not only prevents me from implementing a compression system, it
also leads me to doubt that any efficient means are possible.  That is,
all of the IFS compressed images that we have seen could have been
hand-compressed.  The long time periods are disturbing as well -- WE
DON'T EVEN KNOW THE ORDER OF THE ALGORITHM THAT DOES THE COMPRESSION.
IFS compression may be very nice, but if the compression algorithm is
not of polynomial time ...

Does anyone know whether the literature on geometric algorithms
addresses the general problem behind IFS compression, that is: given an
arbitrary shape, find some smaller copies of the shape that can be
positioned to 'collage' it?  ('Smaller copies' here means contractive
linear transformations of the shape.)  I've seen specific cases of the
problem in brain-teaser books (two smaller rectangles can make a
rectangle, four smaller triangles can make up an equilateral triangle),
but I wonder whether anyone has attacked the problem in general.
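
Just to pin down what 'collage' means quantitatively -- this is only my
own brute-force statement of the objective, not a guess at how Barnsley
automates it -- you can score a candidate set of affine maps by how far
the union of their images of the target shape lies from the target, in
Hausdorff distance.  (By the Collage Theorem, a small score guarantees
that the attractor of those maps is close to the shape.)  A naive
Python sketch, with made-up function names:

import math

def apply_map(m, pt):
    """Affine map m = (a, b, c, d, e, f) applied to the point pt = (x, y)."""
    a, b, c, d, e, f = m
    x, y = pt
    return (a * x + b * y + e, c * x + d * y + f)

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets (O(|A|*|B|))."""
    def one_way(P, Q):
        return max(min(math.hypot(px - qx, py - qy) for (qx, qy) in Q)
                   for (px, py) in P)
    return max(one_way(A, B), one_way(B, A))

def collage_error(target, maps):
    """Distance between the target set and the union of its images under maps."""
    collage = [apply_map(m, pt) for m in maps for pt in target]
    return hausdorff(target, collage)

Even a single evaluation like this is quadratic in the number of sample
points, and the real cost is the search over the map parameters -- which
is exactly the order-of-the-algorithm question above.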

-Thomas C

sjs@roland.ctt.bellcore.com (Stan Switzer) (04/12/90)

Regarding Barnsley's method:

> I've seen one of the images which was compressed 10,000 to one.  It
> took a huge amount of time ( I vaguely recall days) to do the
> compression. Overall quality was not very good but what do you expect
> at that compression rate?

Unless the nature of the image degradation can be characterized in
some formal way, the technique will only be useful for producing
Gauguinesque renditions of scanned images.  I can see some
applications where all you need is gee-whiz graphics, but I can't see
how it could be useful beyond that.

On the other hand, if the nature of the image quality degradation
could be formally (and publicly) described, then the technique might
be useful in a wide range of applications.  As it stands, it is
impossible to tell.

Stan Switzer  sjs@bellcore.com

cs001005@cslab9c.cs.brown.edu (Thomas Colthurst) (04/13/90)

In <22023@bellcore.bellcore.com>, sjs@roland.ctt.bellcore.com (Stan
Switzer) writes:
>Regarding Barnsley's method:
>
>> I've seen one of the images which was compressed 10,000 to one.  It
>> took a huge amount of time ( I vaguely recall days) to do the
>> compression. Overall quality was not very good but what do you expect
>> at that compression rate?
>
>Unless the nature of the image degradation can be characterized in
>some formal way, the technique will only be useful for producing
>Gauguinesque renditions of scanned images.  I can see some
>applications where all you need is gee-whiz graphics, but I can't see
>how it could be useful beyond that.
>
>On the other hand, if the nature of the image quality degradation
>could be formally (and publicly) described, then the technique might
>be useful in a wide range of applications.  As it stands, it is
>impossible to tell.

	It's hard to tell how well Barnsley's algorithm preserves quality
because we don't have the original pictures, just the reconstructed
images.  From my inspection, though, these don't look too bad.
	The nature of the image quality degradation is formally described by
the Collage Theorem, but this is only a theoretical result about the
potential of an IFS compression system, specifically that an increase in
image detail can be obtained with only a linear increase in the size of
the compressed representation.  A specific algorithm that uses IFS
compression may not achieve that potential, or may achieve it only in
n^3 time or worse ...
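	For reference, the quantitative form of that statement (from the
PNAS paper cited in my first post) is: if every map has contractivity at
most s < 1 and the collage misses the target by Hausdorff distance eps,
then the attractor lies within eps / (1 - s) of the target.  A trivial
sketch of the bound (the function name is mine):

def attractor_error_bound(eps, s):
    """Collage Theorem bound: collage error eps, contractivity s < 1."""
    assert 0.0 <= s < 1.0
    return eps / (1.0 - s)

# e.g. a collage good to 2 pixels with s = 0.5 guarantees the decoded
# attractor is within 4 pixels (Hausdorff) of the original.
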
	The evolutionary IFS compression algorithm that I am currently working
on will be able to achieve an arbitrarily specified level of detail, but
I haven't done a time analysis yet ...

-Thomas C

patterso@deimos.ADS.COM (Tim J. Patterson) (04/13/90)

Organization: Advanced Decision Systems

In article <22023@bellcore.bellcore.com> sjs@bellcore.com (Stan Switzer) writes:
>Regarding Barnsley's method:
>
>> I've seen one of the images which was compressed 10,000 to one.  It
>> took a huge amount of time ( I vaguely recall days) to do the
>> compression. Overall quality was not very good but what do you expect
>> at that compression rate?
>
>Unless the nature of the image degradation can be characterized in
>some formal way, the technique will only be useful for producing
>Gauguinesque renditions of scanned images.  I can see some
>applications where all you need is gee-whiz graphics, but I can't see
>how it could be useful beyond that.
>
>On the other hand, if the nature of the image quality degradation
>could be formally (and publicly) described, then the technique might
>be useful in a wide range of applications.  As it stands, it is
>impossible to tell.
>
>Stan Switzer  sjs@bellcore.com

There have been a lot of different measures proposed for compression
error analysis -- rms error, edge loss, rms error in HVS (human visual
system) space, etc. -- and to my knowledge nobody has a definitive image
quality index.  What are you really asking for when you request a
formal, public measure and say that this would make the technique
useful?
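
For concreteness, the plainest of those, rms error, is just the
following (a trivial Python sketch, with grey levels taken as flat
sequences):

import math

def rms_error(original, reconstructed):
    """Root-mean-square grey-level error between two equal-length rasters."""
    assert len(original) == len(reconstructed)
    total = sum((a - b) ** 2 for a, b in zip(original, reconstructed))
    return math.sqrt(total / float(len(original)))

It is easy to compute and easy to fool: a reconstruction can have tiny
rms error and still look wrong, or a large rms error (say, from a
one-pixel shift) and look fine, which is why the HVS-weighted variants
exist in the first place.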

	Tim

flitter@atisun.dt.navy.mil (Lance Flitter) (04/13/90)

   I believe that one of the prime areas that Barnsley's technique
is supposed to be used for is satellite images.  If a satellite had
an IFS generator on board, it could compress weather-pattern photos and
save considerable time sending them to Earth.
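
   Rough numbers, using the figures from the BYTE article and a made-up
link rate (I don't know what an actual weather-satellite downlink runs
at, so the 1 Mbit/s below is purely hypothetical):

raw_bytes        = 130 * 10**6          # the 130-megabyte aerial photo from BYTE
compressed_bytes = raw_bytes // 10000   # 13,000 bytes at the claimed 10,000 to 1
link_bits_per_s  = 10**6                # hypothetical 1 Mbit/s downlink

print("raw image:  %.1f seconds" % (raw_bytes * 8.0 / link_bits_per_s))
print("compressed: %.1f seconds" % (compressed_bytes * 8.0 / link_bits_per_s))

   That is roughly 1040 seconds versus a tenth of a second -- but at 100
hours of encoding time, the on-board compressor is doing all the work,
so the savings only matter if the encoding itself can be made fast.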