[comp.sys.amiga] Compression

koleman@pnet51.orb.mn.org (Kurt Koller) (09/28/90)

Why don't the people that write the LHArc, Zip, etc stuff for the Amiga
compile a Floating-Point version and include that as well?  If this has been
done, where can I find it?
 
Thanks in advance (this is important)

Kurt "Koleman" Koller - amdahl!bungia!orbit!pnet51!koleman

jerry@truevision.com (Jerry Thompson) (10/01/90)

There was a CP/M program for the Osborne 1 which effectively allowed you to
"CD" into a .arc file as if it were a directory.  You could type, copy, and
execute the crunched files in the archive just as if they were regular
files.  All the info in the .arc was automagically unarchived before being
shown so things like file sizes were reported as the uncrunched value.
Anyone want to write a .arc (.zoo, .zip, .lzh ...) device driver?
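
As a rough illustration of the "uncrunched sizes" behaviour (using Python's
zipfile module as a stand-in for .arc, since the original CP/M program isn't
at hand): an archive's directory already records both sizes, so a listing can
report the uncompressed one without unpacking anything.

import zipfile

# Each zip directory entry carries both the compressed and the original
# size, so a listing can show the "uncrunched" value without extracting.
with zipfile.ZipFile("example.zip") as zf:      # hypothetical archive name
    for info in zf.infolist():
        print("%-30s %10d bytes (stored as %d)" %
              (info.filename, info.file_size, info.compress_size))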



-- 
Jerry Thompson                 |     // checks  ___________   | "I'm into S&M,
"What I want to know is, have  | \\ //   and    |    |    |   |  Sarcasm and
 you ever seen Claude Rains?"  |  \X/ balances /_\   |   /_\  |  Mass Sarcasm."

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (10/04/90)

koleman@pnet51.orb.mn.org (Kurt Koller) writes:
>Why don't the people that write the LHArc, Zip, etc stuff for the Amiga
>compile a Floating-Point version and include that as well?  If this has been
>done, where can I find it?
> 
>Thanks in advance (this is important)
>
>Kurt "Koleman" Koller - amdahl!bungia!orbit!pnet51!koleman

Maybe I'm being a bit dense, but since I make data compression one of my
specialties, I don't think so:  what in the world has "floating point"
to do with data compression?  Moreover, if there were some way to convert
the present, in their essence integer based, algorithms to floating point,
why would anyone want to slow them down that much?

There was a science fiction story once about trying to learn the meaning
of life from a mechanical oracle that could answer any clearly expressed
question.

It made the point that by the time you know enough about a subject to
ask a clear question, your question always contains its own answer.

Maybe I could answer a clearer question.  ;-)

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

greg@walt.cc.utexas.edu (Greg Harp) (10/05/90)

In article <1990Oct3.215038.8863@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>koleman@pnet51.orb.mn.org (Kurt Koller) writes:
>>Why don't the people that write the LHArc, Zip, etc stuff for the Amiga
>>compile a Floating-Point version and include that as well?  If this has been
>>done, where can I find it?
>>
>>Kurt "Koleman" Koller - amdahl!bungia!orbit!pnet51!koleman

>Maybe I'm being a bit dense, but since I make data compression one of my
>specialties, I don't think so:  what in the world has "floating point"
>to do with data compression?  Moreover, if there were some way to convert
>the present, in their essence integer based, algorithms to floating point,
>why would anyone want to slow them down that much?
>
>Kent, the man from xanth.
><xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

I think Kurt asks his question because he's looking for a compression
program that uses an FPU if one is available.  From my experience with several
of the more popular compression algorithms, I think I can safely say that
there's no use for floating point operations in data compression.

I can see one obscure, usage-dependent case, where the user is compressing
floating-point data, in which a delta-Y algorithm might be useful.  (A delta-Y
algorithm basically works on the assumption that common _differences_ can
exist between consecutive numbers.  Imagine an x-y graph of a data set, with
x being the position in the data and y being the value.  Common patterns
can sometimes exist in the differences between the current and previous y
values.  Those patterns can then be compressed using your favorite algorithm.
Digitized audio data can be compressed like this, since specific values don't
normally repeat, but _differences_ sometimes do.)
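
A minimal sketch of the delta idea (my own illustration, not code from any of
the archivers mentioned above):

def delta_encode(samples):
    """Replace each value with its difference from the previous one."""
    out, prev = [], 0
    for cur in samples:
        out.append(cur - prev)
        prev = cur
    return out

def delta_decode(deltas):
    """A running sum of the differences restores the original values."""
    out, prev = [], 0
    for d in deltas:
        prev += d
        out.append(prev)
    return out

# Raw audio samples rarely repeat, but their deltas cluster around small
# values that a conventional compressor (LZ, Huffman, ...) handles well.
assert delta_decode(delta_encode([100, 103, 105, 104, 104])) == \
       [100, 103, 105, 104, 104]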

HOWEVER, since most data compression occurs on files, which (of course)
consist of BYTES (be they graphics, text, floating-point numbers, code, or
whatever), flops are pretty useless.  (I'd be rather tickled if someone
proved me wrong here by showing me a floating point compression algorithm.)

If you want speed in data compression, get a fast hard drive and use LZ.
:-) :-) :-)

Greg

--
             Disclaimer:  "Who me?  Surely you must be mistaken!"         _ _
"The lunatic is in the hall.  The lunatics are in my hall.        AMIGA! ////
 The paper holds their folded faces to the floor,                       ////
 And every day the paperboy brings more." -- Pink Floyd           _ _  ////  
                                                                  \\\\////
        Greg Harp               greg@ccwf.cc.utexas.edu            \\XX//

skank@du248-09.cc.iastate.edu (Skank George L) (10/06/90)

In article <1990Oct3.215038.8863@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>koleman@pnet51.orb.mn.org (Kurt Koller) writes:
>>Why don't the people that write the LHArc, Zip, etc stuff for the Amiga
>>compile a Floating-Point version and include that as well?  If this has been
>>done, where can I find it?
>>
>>Kurt "Koleman" Koller - amdahl!bungia!orbit!pnet51!koleman
>
>Maybe I'm being a bit dense, but since I make data compression one of my
>specialties, I don't think so:  what in the world has "floating point"
>to do with data compression?  Moreover, if there were some way to convert
>the present, in their essence integer based, algorithms to floating point,
>why would anyone want to slow them down that much?
>
>Kent, the man from xanth.
><xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

     I think the original author is probably a little confused.  In numerical
analysis theory, there is a form of image compression where an image is
represented by a matrix (the matrix has a bunch of special properties), and the
matrix is operated on (using floating point) to produce a second matrix.  This
new matrix is much smaller and has had the image 'noise' reduced.  The process
results in an overall loss of image resolution; however, it is often possible
to obtain LARGE (100x) reductions in the size of the image data with only
minimal (read: imperceptible) loss of resolution.  Additionally, since this is
a noise-reduction process, the final picture will often look much better than
the original.
                                          --George
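
One scheme that fits this description is a truncated singular value
decomposition; that's an assumption on my part, since the post doesn't name
the exact method.  A small numpy sketch:

import numpy as np

def compress(image, k):
    """Keep only the k largest singular values of a 2-D greyscale image."""
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    return u[:, :k], s[:k], vt[:k, :]   # stores (rows + cols + 1) * k numbers

def decompress(u_k, s_k, vt_k):
    """Rebuild a lossy, noise-reduced approximation of the image."""
    return u_k @ np.diag(s_k) @ vt_k

# A 512x512 image kept at rank 20 stores (512 + 512 + 1) * 20 = 20,500
# numbers instead of 262,144 -- about 13:1 before any further compression;
# higher ratios come from lower ranks at the cost of more blurring.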