[comp.sys.apple] ShrinkIt for the IIgs

nicholaA@batman.moravian.EDU (Andy Nicholas) (10/23/89)

In article <10743@phoenix.Princeton.EDU>, kadickey@phoenix.Princeton.EDU
(Kent Andrew Dickey) writes:

> Don't worry--Andy won't be able to screw you guys over.

I didn't consider charging $40 for a product that I've spent a fairly decent
amount of time writing (ok, a ton of time writing) to be screwing anyone
over.

> I am the author of the compression algorithm used in ShrinkIt.

Kent wrote the original code for the algorithm used in ShrinkIt.  This is
true.  I then rewrote it, debugged it, and optimized it; after 5
different rewrites, it assumed the form you see today in ShrinkIt 2.1.

The code for ShrinkIt/GS was rewritten from scratch on my own, with some
advice from John Brooks on the best way to optimize it.  John wanted me to
go for speed regardless of compression -- in the end I went for better
compression over incredible speed.  The new variant of LZW used in
ShrinkIt/GS compresses better than the LZW found in ShrinkIt 2.1...

> But you probably don't know that--Andy is pretty sparse with his
> credits.

This was just a gross oversight on my part -- I should have given credit
where credit was due.  I sometimes get so busy that things fall between
the cracks... this was one of them.

To remedy that, the "about" box in ShrinkIt 3.0 reads as follows:

"UnSQueeze courtesy of Don Elton.  Type-I LZW originally written by
Kent Dickey.  Type-II LZW derived from Type-I LZW and written by
Andy Nicholas."

I apologize to Kent for not putting that into ShrinkIt sooner.

> The fact that ShrinkIt is so fast is due entirely to the great
> deal of effort (over a year of intermittent hacking) to create a fast,
> and efficient compression method.

That's part of the reason -- if Kent hadn't designed the layout of the LZW
the way he did, further speed improvements in ShrinkIt for the IIe/c and
GS/ShrinkIt wouldn't have been possible.  But I also spent a great deal
of time optimizing and rewriting code.  That's one of the reasons that
ShrinkIt 3.0 will be at least 10% faster than ShrinkIt 2.1 when unpacking
any LZW-type I files.  The LZW-type II used in ShrinkIt 3.0 is slightly
faster when unpacking.

GS/ShrinkIt is twice as fast as ShrinkIt 3.0 when unpacking and 25% faster
when packing.  This is due entirely to *MY* work, not Kent's.  The fact
that it also compresses smaller than the LZW ShrinkIt uses now is
entirely *MY* work, not Kent's.

> Well, I have gotten very tired of Andy's handling of this whole affair,

Actually, what Kent got tired of was not being recognized for designing
the code used for the compression in ShrinkIt...

See, after I decided to use Kent's method -- dynamic LZW, run-length-encoding
each 4k chunk, with a table clear for every 4k chunk -- there wasn't much
turning back.  No matter how many times I rewrite the code for the algorithm
or what I do to it, the code must still decode an LZW-type I NuFX archive.
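
To make the lock-in concrete, here is a rough sketch (in C, rather than the
Merlin assembly ShrinkIt is actually written in) of what any conforming
unpacker has to do with each chunk.  The helper names and the
was_lzw/was_rle flags are my own shorthand, not the actual NuFX record
layout:

    #include <string.h>

    #define CHUNK 4096

    /* Hypothetical helpers -- not the real ShrinkIt routines. */
    extern long lzw_unpack(const unsigned char *in, long inlen,
                           unsigned char *out);  /* table cleared per chunk */
    extern long rle_unpack(const unsigned char *in, long inlen,
                           unsigned char *out);

    /* Undo the two packing stages, in reverse order, for one 4K chunk. */
    long unpack_chunk(const unsigned char *in, long inlen,
                      int was_lzw, int was_rle, unsigned char *out)
    {
        /* intermediate data is never larger than one chunk (assumed) */
        static unsigned char tmp[CHUNK];
        const unsigned char *p = in;
        long n = inlen;

        if (was_lzw) {                   /* undo the LZW stage first...   */
            n = lzw_unpack(p, n, tmp);
            p = tmp;
        }
        if (was_rle)                     /* ...then undo the RLE stage    */
            n = rle_unpack(p, n, out);
        else
            memcpy(out, p, n);
        return n;
    }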

> and so I have created code for Andy McFadden to use in his new C version
> of ShrinkIt.  His program is written in portable C, so not only will it
> compile on the GS, but also on any larger computers you may have access
> to.

I've talked to Andy McFadden (voice) a lot lately, and CShrink (or whatever
it ends up being called -- "NuKE?" :-) seems to be progressing smoothly.

> So don't buy Andy's GS ShrinkIt--we had a verbal agreement that he would
> NEVER charge money or release my source to anyone.  

Yes we did.  The source code to which Kent refers is the code that he
wrote.  I could contend that the person to whom I gave the source
to SHRINKIT 2.00d12 (not just the core algorithm parts) received much more
than just my own code.

I gave the source code to SHRINKIT 2.00d12 to a fellow called Jay Michael
Browning who lives in Arizona.  He had pestered me many, many times for
the source code to ShrinkIt so that he could do a translation for the
Basis 108, a German Apple ][+ clone last manufactured at the beginning
of this decade (not sure when).  So, in April I finally broke down and
sent him the entire source code that I had (which was something like 36
source files).  I haven't heard from him since.  Jay Michael has an excellent
background and reputation, so I sent him the source because I trusted him.

I realize that Kent has construed our agreement to cover everything
that is "ShrinkIt-derived" -- so, I talked to Kent tonight and basically
this is what is going to happen (sit up straight, folks, this affects you
a whole lot):

      (1) ShrinkIt for the IIGS will be freeware.  Kent and I had an agreement
          that I feel should be kept.  This was a decision that I made
          myself.  I've certainly had considerable prodding from many
          sides on this issue, but it boils down to whether I consider
          myself to be an honorable person or not.  I do.  It will be free.


      (2) L&L Productions, the company which was going to market ShrinkIt,
          is going to suffer a considerable loss because they will not be
          marketing it.  They have invested a considerable (I'm not kidding
          here, folks) amount of money in me (shows, hotels).

          I also owe L&L Productions a significant product because I feel
          that I need to be true to my word and follow through on my
          intentions.  If I can't deliver GS/ShrinkIt to them, I can at least
          deliver something else.

          The fact that I am not going to have GS/ShrinkIt marketed
          affects me in a manner most of you will never know.  I have
          basically thrown away a bundle of money to give this to the
          Apple II community -- something that I couldn't really afford to do.
          I will not make the product shareware because I don't believe in
          shareware -- it doesn't work.  If you'd like to get a handle on
          what I've done, start calculating at $5,000 and count upwards...

          So, you can well imagine that I was under considerable pressure
          to do things one way or the other.  Because I've chosen to
          make this FreeWare, I can use all the support I can get.

          Does a decision like that affect me?  Oh yeah.


      (3) ShrinkIt for the II+ and ShrinkIt 3.0 for the IIe and IIc will
          remain freeware.


      (4) There may or may not be more products coming from me, freeware
          or commercial.  I wrote all the code which has gone into
          GS/ShrinkIt (something like 15,000 lines -- I haven't counted
          lately -- for trivia fans, the ShrinkIt 2.1 source is 380k of
          Merlin source, 48 source files, and roughly 22,000 lines long).

          Kent and I have discussed this (not at length), and anything
          I write in the future may very well use the core parts of the
          algorithm which he designed and coded, which I implemented in
          ShrinkIt and rewrote for GS/ShrinkIt.  Since I already owe
          L&L Productions one great product, don't be surprised to see
          some of the functions of GS/ShrinkIt show up in any commercial
          products I might write for them.


      (5) If you wish to bundle ShrinkIt, II+ ShrinkIt, or GS/ShrinkIt with
          any commercial product, you must contact me to license that
          version of ShrinkIt from me.  

          This is not a trivial matter.  Many, many people violate my
          copyright by selling ShrinkIt with another product in order to
          add value to their own product.  Damn, I'm giving away every
          version of ShrinkIt I've ever written...  the *LEAST* you can
          do is contact me if you'd like to ship ShrinkIt with your own
          product.

          To those, like John Snow, AE, and Don Elton (and others), who
          have asked my permission to ship ShrinkIt with their own products,
          kudos.  To the others: stop before I am forced to pursue
          litigation (which I can't afford, but L&L Productions can).

          This is not something that I try to "force" upon anyone.  If you
          don't want ShrinkIt, then don't license it.


      (6) To those of you who like to quote me out of my own manual for
          ShrinkIt and point out that I said that GS/ShrinkIt would be FreeWare
          and then said it would be commercial (and now say it will be
          FreeWare) -- don't.  There's nothing legally binding about stuff
          that I've written.  I'm human and have (in America anyway) the
          freedom of choice.

          I choose to make GS/ShrinkIt free.

          I don't want to see everyone bitching anymore.


> Well, he has now succeeded in doing both.

I think I just explained that whole part.  I've reversed my own decision at
a very high cost to myself to do the honorable thing.  GS/ShrinkIt will be
FreeWare.  Rejoice!  :-)

> Sure, he has added a great deal to the program other than just my algorithm,

Yes, with 380k of source I should certainly HOPE that there is something else
in ShrinkIt other than just the compression algorithm.

Now, we'll get down to the nitty-gritty of all of this -- it's not Kent's
algorithm.

No one "owns" an algorithm outright -- in the United States and most other
western nations, the only recognized way to protect an idea is to PATENT it.
Kent does not have a patent on LZW, nor do I.  (Nor does anyone, I believe.)

I also believe the variation of LZW that Kent designed (do RLE on a 4k
chunk, then do LZW on that 4k, clearing the table after every chunk is done,
always checking to see whether the compressed chunk actually got smaller...
if it didn't, use the original uncompressed chunk) to be un-patentable.

Basically, the LZW variant which Kent designed keeps parts of the file from
expanding beyond their original size.  The net compression therefore comes
only from the parts of the file which shrink, because nothing is allowed to
expand: if a 4k chunk ends up being larger than the original image that was
"compressed" (in this case, expanded), the original uncompressed (and
smaller) image is output to the destination file instead.
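
Here is that decision in sketch form -- C under my own assumptions, with
rle_pack(), lzw_pack(), and lzw_clear_table() as hypothetical helpers and
the real NuFX chunk headers left out:

    #include <string.h>

    #define CHUNK 4096

    extern long rle_pack(const unsigned char *in, long len, unsigned char *out);
    extern long lzw_pack(const unsigned char *in, long len, unsigned char *out);
    extern void lzw_clear_table(void);     /* Type-I: called for every chunk */

    /* Pack one 4k chunk without ever letting it expand. */
    long pack_chunk(const unsigned char *chunk, long len, unsigned char *out)
    {
        static unsigned char rlebuf[CHUNK * 2];  /* RLE can expand; allow 2x */
        static unsigned char lzwbuf[CHUNK * 2];
        long rlelen, lzwlen;

        rlelen = rle_pack(chunk, len, rlebuf);   /* stage 1: RLE the chunk   */
        lzw_clear_table();                       /* stage 2: LZW, new table  */
        lzwlen = lzw_pack(rlebuf, rlelen, lzwbuf);

        if (lzwlen < len && lzwlen < rlelen) {   /* keep whichever is smallest */
            memcpy(out, lzwbuf, lzwlen);
            return lzwlen;
        }
        if (rlelen < len) {
            memcpy(out, rlebuf, rlelen);
            return rlelen;
        }
        memcpy(out, chunk, len);                 /* worst case: store it raw, */
        return len;                              /* so nothing ever expands   */
    }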

Type-II LZW goes a step further and clears the hash table ONLY when the
table fills up.  This allows better compression on the stuff that
compresses well with LZW: text files.
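
In sketch form, the whole difference is one line in the per-chunk setup.
This is my own simplification -- TABLE_MAX and FIRST_FREE are assumed
values, not the actual NuFX constants:

    #define TABLE_MAX  4096   /* 12-bit codes, assumed                  */
    #define FIRST_FREE  257   /* 256 literals plus one reserved code    */

    static int next_code = FIRST_FREE;

    /* Type-I: a fresh string table for every 4k chunk. */
    void start_chunk_type1(void)
    {
        next_code = FIRST_FREE;
    }

    /* Type-II: keep the table across chunks and clear it only when it
       is full, so strings learned early keep compressing later text. */
    void start_chunk_type2(void)
    {
        if (next_code >= TABLE_MAX)
            next_code = FIRST_FREE;
    }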

At one point in our dealings (Kent's and mine), talk arose about "buying"
the rights to "Kent's algorithm" -- to which I had to explain that Kent
doesn't *HAVE* any rights to the algorithm, because it's based upon
existing algorithms and he doesn't have a patent on the work that he did.

> but he has been dishonest about it the whole time.

Dare I say "no"??  I don't feel that I have been dishonest.  I felt that by
rewriting the algorithm from scratch for GS/ShrinkIt and enhancing it that
I was keeping within at least the spirit of our agreement.  Kent, however,
obviously felt differently -- and now, so do I.

As a sordid footnote, I thought I would mention that I did get ahold of the
copies of this article floating around which had the sentence
"And if he ships one copy, I just may sue him."  attached to the end.
For a while, considering that certain parts of that message could be
considered libelous, the threat ran very much the other way.

I will not be bullied into doing something I do or do not want to do.  If
I decide to do something, it will be because I damned well want to do it.
Right now, if anyone wants me to charge for GS/ShrinkIt, it'll be one cold
day before I change my mind (again, sigh).

> 			Kent Dickey

Andy Nicholas

shrinkit@moravian.edu

matthew@sunpix.UUCP ( Sun Visualization Products) (10/24/89)

Okay, Andy has declared that ShrinkIt/GS will be freeware, but let's show our
support for a person who has made a very large contribution to the Apple
community with the entire ShrinkIt line during the past year.

I'm challenging all ShrinkIt users to contribute some financial recompense to
Andy for his hard work.


To Andy:

     I'll be sending a check for $10.00 my next payday.  Enjoy a hot pizza
and a cold drink on me.  (Just don't eat or drink over your keyboard.)


-- 
Matthew Lee Stier                            |
Sun Microsystems ---  RTP, NC  27709-3447    |     "Wisconsin   Escapee"
uucp:  sun!mstier or mcnc!rti!sunpix!matthew |
phone: (919) 469-8300 fax: (919) 460-8355    |

nicholaA@batman.moravian.EDU (Andy Nicholas) (10/25/89)

In article <949@friar-taac.UUCP>, matthew@sunpix.UUCP
(Sun Visualization Products) writes:

> Enjoy a hot pizza
> and a cold drink on me.  (Just don't eat or drink over your keyboard.)

Actually, I've got a fridge in my lab here at college, and we make soda
runs about once a week for 4 gallons of soda... grapefruit seems to be
my favorite right now.

Many of us seem to live on Domino's Pizza as well... :-)

andy

-- 
Andy Nicholas                       InterNET: shrinkit@moravian.edu
Box 435                                 uucp: rutgers!liberty!batman!shrinkit
Moravian College                       GEnie: shrinkit
Bethlehem, PA  18018          America Online: shrinkit        CIS: 70007,3141 

gwyn@smoke.BRL.MIL (Doug Gwyn) (10/25/89)

In article <401@batman.moravian.EDU> nicholaA@batman.moravian.EDU (Andy Nicholas) writes:
>          I choose to make GS/ShrinkIt free.

Thanks!

>Type-II LZW goes a step farther to implement a clear of the hash-table ONLY
>when the table fills up.  This allows better compression to occur on
>stuff that compresses well with LZW: text files.

There is clearly room for substantial improvement in data compression
technology.  When I was implementing an LZW decoder for a GIF-to-rational
image translator, I discovered several places in the GIF LZW algorithm
(apparently borrowed from the BSD UNIX "compress" program) where better
compression was attainable.
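
For readers who haven't seen the inside of one, the core of an LZW decoder
is small.  This is a sketch of the textbook decode loop, not the
GIF-specific code mentioned above; get_code() and put_bytes() are assumed
I/O helpers:

    #define MAXCODES 4096                 /* 12-bit codes */

    static int parent[MAXCODES];          /* prefix code of each table entry */
    static unsigned char final[MAXCODES]; /* last character of each entry    */
    static unsigned char stack[MAXCODES];

    extern int  get_code(void);           /* next input code, -1 at end */
    extern void put_bytes(const unsigned char *p, int n);

    /* Expand one code by walking the parent chain; the string comes out
       backwards, so reverse it into buf.  Returns its length. */
    static int expand(int code, unsigned char *buf)
    {
        int i, n = 0;

        while (code >= 256) {
            stack[n++] = final[code];
            code = parent[code];
        }
        stack[n++] = (unsigned char)code;    /* codes 0-255 are literals */
        for (i = 0; i < n; i++)
            buf[i] = stack[n - 1 - i];
        return n;
    }

    void lzw_decode(void)
    {
        unsigned char buf[MAXCODES];
        int code, prev, n, next_free = 256;

        if ((prev = get_code()) < 0)
            return;
        buf[0] = (unsigned char)prev;        /* first code is a literal */
        put_bytes(buf, 1);

        while ((code = get_code()) >= 0) {
            if (code < next_free) {
                n = expand(code, buf);
            } else {                         /* "KwKwK": the code being   */
                n = expand(prev, buf);       /* defined is used before it */
                buf[n++] = buf[0];           /* is in the table           */
            }
            if (next_free < MAXCODES) {
                parent[next_free] = prev;    /* new entry = prev's string */
                final[next_free] = buf[0];   /* + first char of current   */
                next_free++;
            }
            put_bytes(buf, n);
            prev = code;
        }
    }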

There is at least one good textbook on data compression (I don't remember
the authors and the book is not at hand); perhaps a concerted effort
should be made to devise the best feasible compression algorithms first
and implement them after we've completely solved the problem.  There are
several existing candidates for "good enough" compressors while this
problem is being researched.

nicholaA@batman.moravian.EDU (Andy Nicholas) (10/25/89)

In article <11408@smoke.BRL.MIL>, gwyn@smoke.BRL.MIL (Doug Gwyn) writes:

> There is clearly room for substantial improvement in data compression
> technology.  When I was implementing an LZW decoder for a GIF-to-rational
> image translator, I discovered several places in the GIF LZW algorithm
> (apparently borrowed from the BSD UNIX "compress" program) where better
> compression was attainable.

But you'll usually find (I think Kent can back me up) that those
incremental improvements only increase the compression ratio by a fraction
of a percent and don't amount to much unless you make a lot of
modifications... in which case your algorithm hasn't been 'tweaked,' it's
been rewritten.  :-)

> There is at least one good textbook on data compression (I don't remember
> the authors and the book is not at hand);

J.A. Storer, Data Compression: Methods and Theory, Computer Science Press, 1988.
R. Sedgewick, Algorithms, 2nd ed., Addison-Wesley, 1988.

> perhaps a concerted effort
> should be made to devise the best feasible compression algorithms first
> and implement them after we've completely solved the problem.

Whoa!  CS guys have been trying to do things like this for almost 40 years
now.  There are many really decent algorithms out there (LZHuf, LZSS, LZ,
LZW, LRU, Huffman, etc.) that some people just haven't bothered writing for
one machine or another.  To think that we should wait until we've solved
the problem is like waiting to treat cancer while you await its cure.  It's
not gonna work... :-)

> There are
> several existing candidates for "good enough" compressors while this
> problem is being researched.

Yup, LZSS, LZHuf, LZARI (gads, is this slow), arithmetic compression (this
too), LZW, LRU, Huffman, and a host of others have already been developed.
Rather than wait on some sort of wonderful 'research' into the promise of
better data compression, I think it would be wiser to implement the more
powerful compression algorithms available today for a given type of CPU.

The 3 most powerful that execute in a reasonable length of time that I'm
aware of (throw in more if you know of any) are LZHuf, LZARI (ok, maybe this
is a little unreasonable...) and LRU.  I haven't timed anything with LRU,
but they claim it's 10-25% faster than LHarc and its output is 10-15%
smaller than LHarc's (which is LZHuf).  If this is true, it's not too
shabby.  Then again, the downside is that these algorithms will NEVER be
implemented on a IIe/c, because the data structures which LRU requires take
more than 64k all by themselves.  LZHuf is doable... I think I once
calculated that it would require 28k of data structures to run, and so
could be backfitted into the 8-bit version of ShrinkIt, at least for
decoding.
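
As a back-of-envelope check on that 28k figure: the widely-circulated LZHuf
source uses a 4k ring buffer plus three binary-search-tree arrays for match
finding.  The 2-bytes-per-entry figure below is my assumption for an 8-bit
machine, not Andy's actual arithmetic:

    #include <stdio.h>

    int main(void)
    {
        long n = 4096;   /* ring buffer (match window) size          */
        long f = 60;     /* lookahead length used by LZHuf           */
        long e = 2;      /* assumed bytes per table entry on a 6502  */
        long window, trees;

        window = n + f - 1;               /* text_buf[]  */
        trees  = e * ((n + 1) +           /* lson[]      */
                      (n + 257) +         /* rson[]      */
                      (n + 1));           /* dad[]       */

        printf("window:      %ld bytes\n", window);
        printf("match trees: %ld bytes\n", trees);
        printf("total:       %ld bytes (~%ldk)\n",
               window + trees, (window + trees) / 1024);
        return 0;
    }

This prints a total of 29249 bytes, about 28k -- in the same ballpark as the
figure above (and a decoder alone needs less, since the trees are only used
for match finding while encoding).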

If anyone knows something I don't, pitch in...

andy

-- 
Andy Nicholas                       InterNET: shrinkit@moravian.edu
Box 435                                 uucp: rutgers!liberty!batman!shrinkit
Moravian College                       GEnie: shrinkit
Bethlehem, PA  18018          America Online: shrinkit        CIS: 70007,3141 

neilh@pro-sol.cts.com (Neil Haldar) (10/26/89)

In-Reply-To: message from rti!sunpix!matthew@mcnc.org

        I agree. As soon as I get my copy, a check will be in the mail. Thanks
for providing the Apple world (once again) with a very valuable product.
                                                        -Neil