[comp.graphics] Chaotic Compression

scott@applix.UUCP (Scott Evernden) (11/13/87)

This month's issue of "Computer Graphics World", in the
"Graphics Technology" section, describes an image compression
technique called "Chaotic Compression" which is supposed to
yield exact (lossless) image compression at ratios as high as 1,000,000 to 1.

The article attempts to hint at the algorithm, but winds up
not making any sense.  Does anyone have any information?

Along the same lines, HyperCard utilizes some new image compaction
algorithms yielding (I think) 200-1 compression.  Does anyone know what
Atkinson's trick is?

-scott

bc@mit-amt.MEDIA.MIT.EDU (bill coderre) (11/13/87)

About Hypercard compression:

First off, it is soon to be patented, so you will be able to read the
algorithm before long. No one is currently allowed to talk about it, for the
specific reason that early disclosure might invalidate the patent claims.

Second, HyperCard "cards" are 1-bit pelmaps (pixel maps), 512x342.

Third, compression peaks at 30:1, for "normal" images. Thus, a card,
which is 28k uncompressed, averages 1k-2k compressed.

Fourth, "standard" compressions (like they use in FAX transmission)
average 5:1 or maybe even 10:1 if you are lucky.

Fifth, one of the features is that compressed cards are very fast to
uncompress. A MacII can display 5-10 cards per second, including
setting up all the data structures to actually use the card, if the
cards are in memory.

DISCLAIMER: I worked for Apple this summer, and signed those horrible
non-disclosures. I now work for MIT as a graduate student. The above
does not represent the opinions of anybody but me. OK, OK, OK, OK???  --bc

roy@phri.UUCP (Roy Smith) (11/14/87)

In article <619@applix.UUCP> scott@applix.UUCP (Scott Evernden) writes:

	I couldn't derive the math, but one can calculate the amount of
information in a given image (quantified in bits), and it is theoretically
impossible to compress the image below that without losing information.  How
close you get to that ideal depends on the compression algorithm you use.

	We often use compress 4.0 (you know, the one that comes with 2.11
news) to compress Sun screendumps and get 99.9+% compression.  Of course,
the images we are compressing are very sparse (typically the result of a
line drawing in a full-screen tektool window).  It is interesting to note
that if you take one of these screendump files and run it through "od"
(octal dump; turns each 8-bit byte into a 3-digit octal number), and
compress the output, you get almost exactly the same size file as if you
compressed the original bitmap.  Makes sense, considering that the amount
of information hasn't changed any.
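
For the curious, here is a minimal sketch in C (my own illustration, not
anything from Roy's toolkit; the program name "entropy" is just for show) of
how to put a number on "the amount of information": histogram the byte values
and compute the zeroth-order Shannon entropy, which is a rough lower bound on
what any byte-wise lossless coder such as compress can reach. It also suggests
why the "od" experiment comes out the way it does: recoding bytes as octal
digits doesn't change the entropy per original byte.

#include <stdio.h>
#include <math.h>

int main(int argc, char **argv)
{
    unsigned long count[256] = {0};
    unsigned long total = 0;
    double entropy = 0.0;
    FILE *fp;
    int c, i;

    if (argc < 2 || (fp = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: entropy file\n");
        return 1;
    }
    while ((c = getc(fp)) != EOF) {     /* histogram the byte values */
        count[c]++;
        total++;
    }
    fclose(fp);

    for (i = 0; i < 256; i++)
        if (count[i] > 0) {
            double p = (double)count[i] / (double)total;
            entropy -= p * log(p) / log(2.0);      /* bits per byte */
        }

    printf("%lu bytes, %.3f bits/byte -> at best ~%.1f%% size reduction\n",
           total, entropy, 100.0 * (1.0 - entropy / 8.0));
    return 0;
}
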
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY

jru@etn-rad.UUCP (John Unekis) (11/15/87)

In article <619@applix.UUCP> scott@applix.UUCP (Scott Evernden) writes:
>This month's issue of "Computer Graphics World", in the
>"Graphics Technology" section, describes an image compression
>technique called "Chaotic Compression" which is supposed to
>yield exact (lossless) image compression at ratios as high as 1,000,000 to 1.
>
>The article attempts to hint at the algorithm, but winds up
>not making any sense.  Does anyone have any information?
>
>Along the same lines, HyperCard utilizes some new image compaction
>algorithms yielding (I think) 200-1 compression.  Does anyone know what
>Atkinson's trick is?
...
The Computer Graphics World article is rather slim on details, but
from what I gather, the million-to-one algorithm only works on graphics
images. If you have an object which has well-defined borders and is
filled with relatively random data inside the borders, then you simply
encode the borders as vectors, and the expansion algorithm will draw
the borders and then walk around inside of them, filling the image with
random data. Big Deal. Useless for practical applications.
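
To make the hint concrete, here is a toy sketch in C of what such an
expansion step might look like. Everything in it (the 64x64 buffer, the
single rectangular "border") is my own illustration of the idea as I read it,
not the article's algorithm; it only shows why the ratio can be enormous when
the interior costs zero bits to "store".

#include <stdio.h>
#include <stdlib.h>

#define W 64
#define H 64

int main(void)
{
    static unsigned char img[H][W];
    int x, y;
    /* the entire "compressed image": one rectangular border */
    int left = 10, top = 10, right = 50, bottom = 40;

    for (y = top; y <= bottom; y++)
        for (x = left; x <= right; x++) {
            if (y == top || y == bottom || x == left || x == right)
                img[y][x] = 255;              /* border pixel   */
            else
                img[y][x] = rand() & 0xff;    /* random filler  */
        }

    printf("expanded a %dx%d region from 4 integers\n",
           right - left + 1, bottom - top + 1);
    return 0;
}
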

As for the 200-to-1 compression, this is an old trick. If this is what
I think it is, then they can't really perform compression like this on
any given arbitrary image. Usually these algorithms work on a sequence
of VIDEO images. The trick is that after transmitting frame N, you
subtract frame N from frame N+1, leaving only the differences between
the two frames. Run-length encoding of the difference image should produce
very good compression, and all the receiving side has to do is expand
the difference image and add it to what is currently on the screen. The
triple-digit compression ratios are achieved by aiming the video camera
at an inanimate object like a flower pot. The differences between images
are then almost all zero, except for noise in the video camera, and will
run-length encode extremely well. If anything in the image should have the
audacity to actually move, the compression drops to single-digit figures
instantly. If you are using a scheme like this to send video over a
fixed-bandwidth comm line, every time something moves you get blurring
and ghosting which takes many frames to settle down.
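
Here is a bare-bones sketch in C of that frame-differencing scheme, assuming
8-bit pixels and a made-up 512x480 frame size. The function names (diff_rle,
diff_expand) are hypothetical; this is the generic idea, not any particular
product's codec.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NPIX (512 * 480)                 /* assumed frame size */

/* run-length code the frame difference as (count, value) byte pairs */
long diff_rle(const unsigned char *prev, const unsigned char *cur,
              unsigned char *out)
{
    long i = 0, n = 0;
    while (i < NPIX) {
        unsigned char d = (unsigned char)(cur[i] - prev[i]);
        long run = 1;
        while (i + run < NPIX && run < 255 &&
               (unsigned char)(cur[i + run] - prev[i + run]) == d)
            run++;
        out[n++] = (unsigned char)run;   /* run length        */
        out[n++] = d;                    /* pixel difference  */
        i += run;
    }
    return n;                            /* bytes "sent" for this frame */
}

/* receiver: expand the runs and add them onto the previous frame */
void diff_expand(const unsigned char *code, long n, unsigned char *screen)
{
    long i = 0, p = 0;
    while (i < n) {
        long run = code[i++];
        unsigned char d = code[i++];
        while (run-- > 0) {
            screen[p] = (unsigned char)(screen[p] + d);
            p++;
        }
    }
}

int main(void)
{
    static unsigned char prev[NPIX], cur[NPIX], code[2 * NPIX];
    long i, n;

    memcpy(cur, prev, NPIX);             /* flower pot: nothing moved   */
    n = diff_rle(prev, cur, code);
    printf("static scene: %ld coded bytes for %d pixels (%.0f:1)\n",
           n, NPIX, (double)NPIX / (double)n);

    for (i = 0; i < NPIX / 2; i++)       /* something moved: half the   */
        cur[i] = (unsigned char)rand();  /* frame now differs randomly  */
    n = diff_rle(prev, cur, code);
    printf("moving scene: %ld coded bytes (%.2f:1)\n",
           n, (double)NPIX / (double)n);

    diff_expand(code, n, prev);          /* prev now matches cur again  */
    return 0;
}
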

In general, There Ain't No Such Thing As A Free Lunch. Information-preserving
compression schemes that work on single arbitrary gray-scale images almost
never produce over 3 to 1 compression without using a week's time on a CRAY.
Non-information-preserving algorithms won't go much over 10 to 1 on a real
(not rigged for the algorithm) image without turning it to MUD. The best
compression algorithm for general-purpose images that I have seen yet is
the National Imagery Transmission Format from the DIA. It will let you
convert an 8-bit-per-pixel image into 4.5, 2.3, 1.4, 0.75, or 0.5 bits per
pixel, depending on how much degradation you can stand. If anyone
has seen a REAL algorithm that works better than this, please let me know.
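
For reference, those rates correspond to compression ratios of roughly
1.8:1, 3.5:1, 5.7:1, 10.7:1, and 16:1. A trivial C snippet (my own, nothing
to do with the NITF coder itself) makes the arithmetic explicit:

#include <stdio.h>

int main(void)
{
    double rates[] = { 4.5, 2.3, 1.4, 0.75, 0.5 };   /* bits/pixel */
    int i;

    for (i = 0; i < 5; i++)        /* ratio vs. an 8 bit/pixel original */
        printf("%4.2f bits/pixel  ->  %4.1f : 1\n",
               rates[i], 8.0 / rates[i]);
    return 0;
}
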

thomas%spline.uucp@utah-gr.UUCP (Spencer W. Thomas) (11/16/87)

In article <305@etn-rad.UUCP> jru@etn-rad.UUCP (John Unekis) writes:
>Non-information-preserving algorithms won't go much over 10 to 1 on a real
>(not rigged for the algorithm) image without turning it to MUD.  If anyone
>has seen a REAL algorithm that works better than this, please let me know.

Dr. Thomas Stockham and his colleagues, of the EE and CS departments
here at the University of Utah, have been developing vector
quantization algorithms for image compression that can (currently)
reach as little as 0.1 bit/pixel (from an original 8 bit image), a
compression ratio of 80 to 1.  The resulting picture looks a little
like what you get from a VHS video recorder on the slow speed.  It is
certainly not "MUD". 

He hopes to be able to get below 0.01 bits/pixel for motion sequences.
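
For readers who haven't met vector quantization: the basic encode step is a
nearest-neighbor codebook lookup per block of pixels. The sketch below is
plain C with block size and codebook size chosen purely for illustration (it
is not Dr. Stockham's code); a 256-entry codebook over 4x4 blocks gives
0.5 bit/pixel, and the lower rates mentioned above need bigger blocks,
smaller codebooks, or more elaborate (e.g. hierarchical) schemes.

#include <stdio.h>

#define BLK   16                 /* 4x4 block = 16 pixels      */
#define NCODE 256                /* codebook entries (assumed) */

/* return index of the codebook vector closest to block (squared error) */
int vq_encode_block(const unsigned char block[BLK],
                    const unsigned char codebook[NCODE][BLK])
{
    long best_err = -1;
    int best = 0, i, k;

    for (i = 0; i < NCODE; i++) {
        long err = 0;
        for (k = 0; k < BLK; k++) {
            long d = (long)block[k] - (long)codebook[i][k];
            err += d * d;
        }
        if (best_err < 0 || err < best_err) {
            best_err = err;
            best = i;
        }
    }
    return best;                 /* one 8-bit index stands in for 16 pixels */
}

int main(void)
{
    static unsigned char codebook[NCODE][BLK];
    unsigned char block[BLK];
    int i, k;

    /* toy codebook: entry i is a flat block of gray level i */
    for (i = 0; i < NCODE; i++)
        for (k = 0; k < BLK; k++)
            codebook[i][k] = (unsigned char)i;

    for (k = 0; k < BLK; k++)            /* a ramp block to encode */
        block[k] = (unsigned char)(95 + k);

    printf("block coded as codebook index %d\n",
           vq_encode_block(block, codebook));
    return 0;
}
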

=Spencer   ({ihnp4,decvax}!utah-cs!thomas, thomas@cs.utah.edu)