[comp.unix.questions] Algorithm used in compress/decompress ?

vadi@cs.iastate.edu (Vadivelu Elumalai) (04/16/91)

Recently I read about several compression techniques used for text
compression. It seems there are two broad methods for compressing
text: dictionary methods and statistical methods. I want to know the
algorithm used in the Unix text compression utilities
compress/uncompress. It has to be a standard algorithm, since it is
possible to uncompress text that was compressed on another machine.
Is there a GURU who can explain this?
Thanks.
Vadi.

--
Vadivelu Elumalai,                    U.Snail : 813, Wilson Avenue,
B-20, Atanasoff,                                Ames, Iowa - 50010.
Iowa State University,                AT&T    : (515) - 232 - 7220
Ames, IA - 50010                      E-mail  : vadi@judy.cs.iastate.edu

urban@cbnewsl.att.com (john.urban) (04/16/91)

Compress basically uses the algorithm from Terry Welch's article
"A Technique for High-Performance Data Compression", IEEE Computer,
Vol. 17, No. 6 (June 1984).

Whereas pack basically uses the Huffman encoding algorithm.
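
To give a feel for the Huffman side, here is a minimal sketch of the
tree-building step (my own illustration, not pack's actual source).
It only computes code lengths from byte frequencies; pack's real
on-disk format and bit-level output are not shown.

/* A minimal sketch of Huffman tree construction: count byte
 * frequencies, then repeatedly merge the two lightest parentless
 * nodes.  A symbol's code length is its final depth in the tree.
 * A real coder would use a heap, not this linear scan. */
#include <stdio.h>

#define NSYM 256

int main(void)
{
    long freq[2 * NSYM] = {0};         /* leaf weights, then merged nodes */
    int parent[2 * NSYM];
    int nnodes = NSYM;
    int c, i, len;

    while ((c = getchar()) != EOF)     /* count byte frequencies on stdin */
        freq[c]++;

    for (i = 0; i < 2 * NSYM; i++)
        parent[i] = -1;

    for (;;) {                         /* merge until one root remains */
        int lo1 = -1, lo2 = -1;

        for (i = 0; i < nnodes; i++) {
            if (parent[i] != -1 || freq[i] == 0)
                continue;              /* skip merged and unused entries */
            if (lo1 == -1 || freq[i] < freq[lo1]) {
                lo2 = lo1;
                lo1 = i;
            } else if (lo2 == -1 || freq[i] < freq[lo2])
                lo2 = i;
        }
        if (lo2 == -1)
            break;                     /* fewer than two left: tree done */
        freq[nnodes] = freq[lo1] + freq[lo2];
        parent[lo1] = parent[lo2] = nnodes;
        nnodes++;
    }

    /* Report each symbol's code length (a lone symbol gets depth 0
     * here; a real coder would still give it a one-bit code). */
    for (c = 0; c < NSYM; c++) {
        if (freq[c] == 0)
            continue;
        for (len = 0, i = c; parent[i] != -1; i = parent[i])
            len++;
        printf("byte %3d: freq %ld, code length %d\n", c, freq[c], len);
    }
    return 0;
}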

Sincerely,

John Urban

gwyn@smoke.brl.mil (Doug Gwyn) (04/17/91)

In article <vadi.671773259@judy.cs.iastate.edu> vadi@cs.iastate.edu (Vadivelu Elumalai) writes:
>I want to know the algorithm used in the Unix text compression utilities
>compress/uncompress.

It's basically the Lempel-Ziv-Welch (LZW) scheme, which is a dynamic
dictionary method that encodes variable-length strings of bytes as
single codes.
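
To make that concrete, here is a bare-bones sketch of the LZW
encoding loop (my illustration, not the actual compress source).
It prints codes as decimal numbers; the real compress packs
variable-width 9- to 16-bit codes, reserves code 256 as a CLEAR
code, and finds dictionary entries by hashing instead of this
linear search.

/* A minimal LZW encoding loop: keep extending the current string
 * while it is in the dictionary; when it falls out, emit the code
 * for the known part, add the extended string as a new entry, and
 * start over from the byte that broke the match. */
#include <stdio.h>

#define MAXCODES 4096                  /* 12-bit codes, as in compress -b12 */

static int prefix[MAXCODES];           /* code -> code of its prefix string */
static int suffix[MAXCODES];           /* code -> final byte of the string */
static int ncodes = 256;               /* codes 0..255 stand for themselves */

/* Return the code for the string <code p> + <byte c>, or -1 if unknown. */
static int find(int p, int c)
{
    int i;

    for (i = 256; i < ncodes; i++)
        if (prefix[i] == p && suffix[i] == c)
            return i;
    return -1;
}

int main(void)
{
    int c, p, i;

    if ((p = getchar()) == EOF)        /* p holds the longest known string */
        return 0;

    while ((c = getchar()) != EOF) {
        i = find(p, c);
        if (i >= 0)
            p = i;                     /* string + c is known; keep growing */
        else {
            printf("%d\n", p);         /* emit code (decimal, for clarity) */
            if (ncodes < MAXCODES) {   /* learn string + c as a new entry */
                prefix[ncodes] = p;
                suffix[ncodes] = c;
                ncodes++;
            }
            p = c;                     /* start over from the new byte */
        }
    }
    printf("%d\n", p);                 /* flush the last pending code */
    return 0;
}

The decoder rebuilds exactly the same dictionary as it reads codes,
which is why no table needs to be transmitted with the data.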

There was a basic summary of compression methods posted just the other
day to the Usenet "comp.compression" newsgroup; if it hasn't yet
expired at your site, you should read that article.