[comp.lang.c] Value of microefficiency

jfh@rpp386.UUCP (John F. Haugh II) (06/30/88)

In article <818@garth.UUCP> smryan@garth.UUCP (Steven Ryan) writes:
>         It is very difficult to find a general rule to best pack a structure.
>One of the easiest is to throw the whole mess at the compiler and say gimme your
>best shot.

i don't know what value microefficiency has this week, but in general,
writing good solid algorithms is what is important.  does anyone still
use profil anymore?  what about size?

- john.
-- 
John F. Haugh II                 +--------- Cute Chocolate Quote ---------
HASA, "S" Division               | "USENET should not be confused with
UUCP:   killer!rpp386!jfh        |  something that matters, like CHOCOLATE"
DOMAIN: jfh@rpp386.uucp          |             -- with my apologies

bill@proxftl.UUCP (T. William Wells) (07/04/88)

In article <3401@rpp386.UUCP>, jfh@rpp386.UUCP (John F. Haugh II) writes:
> i don't know what value microefficiency has this week, but in general,
> writing good solid algorithms is what is important.

Just so you know: around here, what you are calling
microefficiency makes the difference between staying in business
or not. There is no argument against the idea that a good
algorithm is the proper base from which to start (though there is
certainly room for discussion as to what constitutes a good
algorithm). But, a good algorithm is ONLY THE BEGINNING.

Once you have an algorithm, you must IMPLEMENT it. BOTH are
necessary. And as important. It is not either/or.

Some people argue that, ultimately, resource consumption is
irrelevant to what constitutes a good algorithm.  Their "reason"
is that processors get better, memory gets cheaper, disks get
faster, etc.

Actually, what happens is this:

	Consumption expands to the limits of resources.

So, I might accept that the days of PCs with less than a meg
are gone (and yes, I know that is BS; but, just for the sake of
argument), but that does not mean that our spelling checker can
get larger.  Oh no.  What it means is that our customers expand
their product to use the extra memory.  Should our spelling
checker get significantly larger, they will go to another
vendor.  And should it be slower than another's...

Anyone who thinks that "microefficiency" is irrelevant either is
not in the real world or has customers who have indefinitely
deep pockets (or who just do not understand the problem, but
time will cure that ignorance).

>                                                      does anyone still
> use profil anymore?  what about size?

Profilers are essential tools around here; so are any other
performance measurement tools that are appropriate to the task.
Including size.

jfh@rpp386.UUCP (John F. Haugh II) (07/08/88)

In article <416@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>In article <3401@rpp386.UUCP>, jfh@rpp386.UUCP (John F. Haugh II) writes:
>> i don't know what value microefficiency has this week, but in general,
>> writing good solid algorithms is what is important.
>
>        There is no argument against the idea that a good
>algorithm is the proper base from which to start (though there is
>certainly room for discussion as to what constitutes a good
>algorithm). But, a good algorithm is ONLY THE BEGINNING.

er, just the same way that in order to build a strong building you have
to have a solid foundation.  but that is, after all, only the beginning.

if i write a sort routine with O(n*ln n) and you write one O(n**2),
how much ``micro optimization'' is it going to take to outperform my
poorly implemented, but superior time complexity, algorithm?  if your
idea of a really swell sort algorithm is a bubble sort, no matter how
much optimizing you perform you are going to lose.

even a highly optimized bad algorithm is still a bad algorithm.
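
to put rough numbers on it, here is a back-of-the-envelope sketch
(assuming, just for illustration, that a comparison costs about the
same in either routine and giving the sloppy implementation a
factor-of-ten handicap):

	#include <stdio.h>
	#include <math.h>

	/* compare c1*n*log2(n) against c2*n*n.  even with the decent
	 * algorithm implemented ten times more sloppily per comparison
	 * (c1 = 10, c2 = 1), it pulls ahead quickly as n grows.
	 */
	int main(void)
	{
		double c1 = 10.0, c2 = 1.0;	/* assumed constant factors */
		long n;

		for (n = 10; n <= 1000000L; n *= 10)
			printf("n = %7ld:  n*log2(n) ~ %12.0f   n*n ~ %14.0f\n",
			    n, c1 * n * (log((double) n) / log(2.0)),
			    c2 * (double) n * (double) n);
		return 0;
	}

somewhere around n = 60 the handicap stops mattering, and after that
it is no contest.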

- john.
-- 
John F. Haugh II                 +--------- Cute Chocolate Quote ---------
HASA, "S" Division               | "USENET should not be confused with
UUCP:   killer!rpp386!jfh        |  something that matters, like CHOCOLATE"
DOMAIN: jfh@rpp386.uucp          |             -- with my apologies

chris@mimsy.UUCP (Chris Torek) (07/08/88)

In article <3693@rpp386.UUCP> jfh@rpp386.UUCP (John F. Haugh II) writes:
>if i write a sort routine with O(n*ln n) and you write one O(n**2),
>how much ``micro optimization'' is it going to take to outperform my
>poorly implemented, but superior time complexity, algorithm?  if your
>idea of a really swell sort algorithm is a bubble sort, no matter how
>much optimizing you perform you are going to lose.
>
>even a highly optimized bad algorithm is still a bad algorithm.

This is true, but be careful: theoretical average performance on
theoretical average data is not the same as actual performance on
actual data.  For instance:

	swap(a, m, n):
	    t <- a[m]; a[m] <- a[n]; a[n] <- t;

	bsort(a, n):
	    do
		sorted <- true;
		for i in [0..n-1) do
		    if a[i] > a[i+1] then
			swap(a, i, i+1);
			sorted <- false;
		    fi;
		rof;
	    until sorted;

versus a quicksort of the form

	qsort(a, l, h):
	    if l >= h then return; fi;
	    i <- l; j <- h;
	    do
		while a[i] < a[h] do i++;
		while --j > i and a[j] >= a[h] do null;
	    while (i < j and [swap(a, i, j); true]);
	    if i < h then
		swap(a, i, h);
		qsort(a, i + 1, h);
	    fi;
	    qsort(a, l, i - 1);

In the `usual' case, qsort wins by great leaps.  But what if the
array is already sorted?  (There is a fix for qsort, involving
choosing a median partition element, but then suppose that n<4:
bsort will still win, on sheer simplicity.)

    ------------------------------------------------------
    Always consider actual data when analysing algorithms.
    ------------------------------------------------------

In practice, I have used a routine which either does an insertion
sort or a quicksort, depending on how scrambled the data are
expected to be.  Profiling showed that this worked quite well.
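
A minimal sketch of that sort of hybrid (not the actual routine; the
names, the cutoff of 16, and the `mostly_sorted' flag are merely
assumptions for the example):

	/* insertion sort: cheap on short or nearly-sorted input */
	static void
	isort(int *a, int n)
	{
		int i, j, t;

		for (i = 1; i < n; i++) {
			t = a[i];
			for (j = i; j > 0 && a[j - 1] > t; j--)
				a[j] = a[j - 1];
			a[j] = t;
		}
	}

	/* plain quicksort on a[l..h], last element as partition element */
	static void
	quick(int *a, int l, int h)
	{
		int i, j, t, pivot;

		if (l >= h)
			return;
		pivot = a[h];
		i = l - 1;
		for (j = l; j < h; j++)
			if (a[j] < pivot) {
				i++;
				t = a[i]; a[i] = a[j]; a[j] = t;
			}
		i++;
		t = a[i]; a[i] = a[h]; a[h] = t;
		quick(a, l, i - 1);
		quick(a, i + 1, h);
	}

	/* pick the sort based on size and on what the caller expects */
	void
	hybrid_sort(int *a, int n, int mostly_sorted)
	{
		if (n < 16 || mostly_sorted)
			isort(a, n);
		else
			quick(a, 0, n - 1);
	}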
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

ok@quintus.uucp (Richard A. O'Keefe) (07/11/88)

In article <12369@mimsy.UUCP> chris@mimsy.UUCP (Chris Torek) writes:
>This is true, but be careful: theoretical average performance on
>theoretical average data is not the same as actual performance on
>actual data.  For instance: [bsort -vs- qsort]
>In the `usual' case, qsort wins by great leaps.  But what if the
>array is already sorted?
>In practice, I have used a routine which either does an insertion
>sort or a quicksort, depending on how scrambled the data are
>expected to be.  Profiling showed that this worked quite well.

There is no substitute for checking the literature.  According to all
three of Knuth Vol. 3, Mehlhorn, and Sedgewick, the average case of
quicksort does about 1.4 times as many comparisons as the *worst* case
of merge sort.  Merge sort is stable, which means that sorting on a
sequence of keys is easy.  What is more, the natural merge exploits
existing order with the utmost ease.
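
For instance, a natural merge on a singly linked list takes only a
handful of lines.  A sketch (the node type is made up for the example,
and this is not tuned code):

	#include <stddef.h>

	struct node {
		int		key;
		struct node	*next;
	};

	/* merge two ascending runs; "<=" keeps the sort stable */
	static struct node *
	merge(struct node *a, struct node *b)
	{
		struct node head, *t = &head;

		while (a != NULL && b != NULL) {
			if (a->key <= b->key) {
				t->next = a; a = a->next;
			} else {
				t->next = b; b = b->next;
			}
			t = t->next;
		}
		t->next = (a != NULL) ? a : b;
		return head.next;
	}

	/* detach the leading ascending run; *rest gets what follows */
	static struct node *
	run(struct node *list, struct node **rest)
	{
		struct node *p = list;

		while (p->next != NULL && p->key <= p->next->key)
			p = p->next;
		*rest = p->next;
		p->next = NULL;
		return list;
	}

	/* each pass merges adjacent runs until only one run remains */
	struct node *
	nmsort(struct node *list)
	{
		struct node *r, *out, *tail;
		int nruns;

		if (list == NULL)
			return list;
		do {
			out = tail = NULL;
			nruns = 0;
			while (list != NULL) {
				r = run(list, &list);
				nruns++;
				if (list != NULL) {
					r = merge(r, run(list, &list));
					nruns++;
				}
				if (out == NULL)
					out = r;
				else
					tail->next = r;
				for (tail = r; tail->next != NULL; )
					tail = tail->next;
			}
			list = out;
		} while (nruns > 2);
		return list;
	}

On already-sorted input the first pass finds a single run and the
sort finishes in one linear scan.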

bill@proxftl.UUCP (T. William Wells) (07/11/88)

In article <3693@rpp386.UUCP>, jfh@rpp386.UUCP (John F. Haugh II) writes:
> In article <416@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
> >In article <3401@rpp386.UUCP>, jfh@rpp386.UUCP (John F. Haugh II) writes:
> >> i don't know what value microefficiency has this week, but in general,
> >> writing good solid algorithms is what is important.
> >
> >        There is no argument against the idea that a good
> >algorithm is the proper base from which to start (though there is
> >certainly room for discussion as to what constitutes a good
> >algorithm). But, a good algorithm is ONLY THE BEGINNING.
>
> er, just the same way that in order to build a strong building you have
> to have a solid foundation.  but that is, after all, only the beginning.

Right. And remember that for buildings of any size, the stuff
built on the foundation requires more time and effort to build
than the foundation does. This is not intended as a defence of
the false notion that efficiency is more important than a good
algorithm, but rather to show that the analogy is not relevant.

> if i write a sort routine with O(n*ln n) and you write one O(n**2),
> how much ``micro optimization'' is it going to take to outperform my
> poorly implemented, but superior time complexity, algorithm?  if your
> idea of a really swell sort algorithm is a bubble sort, no matter how
> much optimizing you perform you are going to lose.
>
> even a highly optimized bad algorithm is still a bad algorithm.

Ok, what is a bad algorithm?

	A bubble sort?

	Well, I've used one where I had a small number of items
	to sort, no memory to spare, and irrelevant execution
	time constraints.

	A selection sort?

	I've used this where the number of items was not too
	large and the cost of moving an item was huge.

	An insertion sort?

	I've used this one where the insertion time was
	irrelevant but the cost of memory and code was important.

	A Shell sort?

	I use this sort if the number of items is reasonable and
	the code complexity or memory requirements of an O(n*ln
	n) sort are prohibitive.

	A quicksort?

	Actually my favorite non-memory-hog sort, I use this
	when I have no good reason to favor another sort.

	A heapsort?

	I use this one when I need to be able to use the sorted
	items as a queue.

	A linked-list sort?

	Once upon a time I got very tired of the slowness of the
	UNIX sort program. So I wrote a clone of it. I invented a
	whole new algorithm for it which you could call a
	linked-list merge sort.  It beat every other sort method
	that I tested hands-down.  Its only drawback is the
	memory for the linked list.  (Actually, I'd bet that
	someone invented this sort before me, but at the time,
	I'd never heard of anything like it.)

	A radix sort?

	I just finished coding one of these for a program which
	burns incredible amounts of time on our Sun. It just so
	happens that one critical piece of code sorts items by
	length and the maximum length is easy to compute and
	always small.

	[A sort algorithm with no name.]

	I invented a new hashing scheme; someone else here
	discovered a way to use it as a sorting method. How about
	a constant time sort that uses a fair amount of memory?
	It is often just the right tool for in-memory sorting of
	small keys.

In each case, the algorithm selected was the right one for the
job. None of these, not even the bubble sort, is a *bad*
algorithm; at worst, one could call them inappropriate.
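
(To make the radix-by-length case above concrete: with small,
non-negative lengths it reduces to a counting sort.  A sketch with
made-up types -- not the code I actually wrote:)

	#include <stdlib.h>

	struct item {
		int	len;		/* the small key we sort on */
		/* ... whatever else ... */
	};

	/* Stable counting sort on length: O(n + maxlen) time, one
	 * scratch int per possible length.  Error handling is omitted
	 * in this sketch.
	 */
	void
	sort_by_length(struct item **in, struct item **out, int n, int maxlen)
	{
		int i, *count;

		count = (int *) calloc((size_t) (maxlen + 1), sizeof(int));
		if (count == (int *) 0)
			return;
		for (i = 0; i < n; i++)
			count[in[i]->len]++;
		for (i = 1; i <= maxlen; i++)	/* running totals -> slots */
			count[i] += count[i - 1];
		for (i = n - 1; i >= 0; i--)	/* backwards keeps it stable */
			out[--count[in[i]->len]] = in[i];
		free(count);
	}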

Now that I have displayed my credentials :-), let me elaborate
on my previous posting. Writing a program has many steps.  Among
them are choosing an algorithm and coding the algorithm. Failure
to do either results in the non-existence of the program. Doing
either badly results in a poorer program than could have been.

It is not legitimate to say that choosing a better algorithm is
more important than implementing it efficiently.  Nor the
reverse.  To emphasize this: a programmer who can only select
among algorithms (assuming this to be possible; actually, it
isn't) is as poor a programmer as one who can only code
efficiently.  And the best programmers have a large toolbox of
algorithms and a large selection of good coding techniques.

Most people who defend the notion that there are good and bad
algorithms try to treat algorithms as if they exist in a vacuum.
This is Idealist hokum.  Algorithms exist to serve purposes; the
quality of a given algorithm is simply how well it serves a
given purpose, and thus cannot be spoken of independently of
said purpose.

gwyn@brl-smoke.ARPA (Doug Gwyn ) (07/11/88)

In article <449@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>	Once upon a time I got very tired of the slowness of the
>	UNIX sort program. So I wrote a clone of it. I invented a
>	whole new algorithm for it which you could call a
>	linked-list merge sort.  It beat every other sort method
>	that I tested hands-down.  Its only drawback is the
>	memory for the linked list.  (Actually, I'd bet that
>	someone invented this sort before me, but at the time,
>	I'd never heard of anything like it.)

You don't actually need linked lists for this method; arrays of
pointers to the records will do fine.  That is known as "list merge"
sorting (see Knuth vol. 3), and it is my favorite method.

I, too, reimplemented the sort utility, although my motivation was
to sort binary records instead of text lines.  As I iteratively
improved it, it evolved into purely merge sorting, using external
"run" files when necessary and sorting the internal buffer via
list merge (originally this was done via heapsort).  Someday I
hope to polish this up and contribute it to the sources newsgroup.

>Most people who defend the notion that there are good and bad
>algorithms try to treat algorithms as if they exist in a vacuum.
>This is Idealist hokum. Algorithms exist to serve purposes, the
>quality of a given algorithm is simply how well it serves a
>given purpose and thus can not be spoken of independently of
>said purpose.

Well said, but this should be tempered with the realization that
code often survives its original intended purpose and gets pressed
into service for unanticipated uses.  Therefore it often pays off
in the long run to select an algorithm that works well enough to
not cause problems when this happens.  Some sorting methods, e.g.
bubble sort, are a real catastrophe when scaled up, so usually it
is wise to pick a method that scales better.

One of my pet peeves is code that assumes that everything will
always fit into main memory, when it would have been only slightly
more trouble to code a method that doesn't rely on that assumption.
Of course, sometimes there is a huge performance difference, but
not always -- especially when you consider that relying on virtual
memory amounts to surrendering application control over external
buffering to the operating system, which doesn't know what the
best buffering strategy for the particular application is.

g-rh@cca.CCA.COM (Richard Harter) (07/11/88)

In article <449@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>In article <3693@rpp386.UUCP>, jfh@rpp386.UUCP (John F. Haugh II) writes:
>> In article <416@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>> >In article <3401@rpp386.UUCP>, jfh@rpp386.UUCP (John F. Haugh II) writes:
>> >> i don't know what value microefficiency has this week, but in general,
>> >> writing good solid algorithms is what is important.

>> >        There is no argument against the idea that a good
>> >algorithm is the proper base from which to start (though there is
>> >certainly room for discussion as to what constitutes a good
>> >algorithm). But, a good algorithm is ONLY THE BEGINNING.

>> er, just the same way that in order to build a strong building you have
>> to have a solid foundation.  but that is, after all, only the beginning.
>> if i write a sort routine with O(n*ln n) and you write one O(n**2),
>> how much ``micro optimization'' is it going to take to outperform my
>> poorly implemented, but superior time complexity, algorithm?  if your
>> idea of a really swell sort algorithm is a bubble sort, no matter how
>> much optimizing you perform you are going to lose.

>> even a highly optimized bad algorithm is still a bad algorithm.

Wells discusses at length the point that 'good' and 'bad' are context
dependent terms when applied to algorithms, with particular reference
to sorting algorithms.

What he did not do is point out that John's comments were a nonresponsive
false dichotomy.  The original comment amounts to:

"Pick the right algorithm and then optimize it."

John's response amounts to:

"An optimized bad algorithm is worse than a good algorithm."

When it is stated this baldly it is fairly clear that John has simply
ignored the point being made and has posed a false dichotomy.  John's
comment is faulty on two grounds.

First, as Wells points out, "good" and "bad"  depend on context; an
O(f(n)) evaluation is inadequate.  Secondly, the original point is
important -- once one has chosen the "right" algorithm, it is still
important (and profitable) to implement it efficiently.  As we all
know, there are a host of tradeoffs to consider, such as maintainability,
cost of constructing an optimized version, the return from optimization,
and, of course, the cost of analyzing the tradeoffs.  

-- 

In the fields of Hell where the grass grows high
Are the graves of dreams allowed to die.
	Richard Harter, SMDS  Inc.

boyne@hplvly.HP.COM (Art Boyne) (07/12/88)

Just to show that algorithms aren't everything, however, a case in point:

About 10 years ago, a project I was tangentially connected with needed to
do a data base sort on a floppy-based microprocessor system.  After
spending a lot of time choosing the algorithm, the guy doing the sort
chose one not very different from that running on the HP 3000 minicomputer.
After tuning the code, the micro/floppy system was able to outperform the
HP 3000 with a hard disk.

Long live optimization!

Art Boyne, ...!hplabs!hplvly!boyne

ebg@clsib21.UUCP (07/13/88)

>>	A linked-list sort?
>>
>>	Once upon a time I got very tired of the slowness of the
>>	UNIX sort program. So I wrote a clone of it. I invented a
>>	whole new algorithm for it which you could call a
>>	linked-list merge sort.  It beat every other sort method
>>	that I tested hands-down.  Its only drawback is the
>>	memory for the linked list.  (Actually, I'd bet that
>>	someone invented this sort before me, but at the time,
>>	I'd never heard of anything like it.)

I "invented" the same sort of algorithm when I was working in Fort
Lauderdale, only it was because I was tired of storing sorted lists
on intermediate files on disk.  I've seen nothing on it since
then, either; maybe someone has a reference.

Since it was "invented" in Fort Lauderdale, do we know each other?

	--ed gordon (data systems associates, 513a ridgefield
			circle, clinton, MA 01510)

ebg@clsib21.UUCP (07/13/88)

>>	A linked-list sort?
>>
>>	Once upon a time I got very tired of the slowness of the
>>	UNIX sort program. So I wrote a clone of it. I invented a
>>	whole new algorithm for it which you could call a
>>	linked-list merge sort.  It beat every other sort method
>>	that I tested hands-down.  Its only drawback is the
>>	memory for the linked list.  (Actually, I'd bet that
>>	someone invented this sort before me, but at the time,
>>	I'd never heard of anything like it.)

I wrote it in 8080 assembler, not wanting to deal with overflowing to
buckets for a hash sort.  It was written to replace storing of
labels in intermediate files, and sorting to disk.  Its size was
unlimited, considering "heaping" to disk.  Today, with megs of memory
and virtual addressing (especially on UNIX), it would allow virtual
allocation of memory segments.

Similarly, any list can be defeated by having extremely ordered sets
of data, including alphabetic.  Linked list sort by hash would be
slightly slower than a straight linked-list sort, but would eliminate
the need for overflow.

	--ed gordon (data systems associates, 513a ridgefield circle,
			clinton, ma 01510)

zeeff@b-tech.UUCP (Jon Zeeff) (07/13/88)

In article <688@clsib21.UUCP> ebg@clsib21.UUCP (Ed Gordon) writes:
>
>>>	A linked-list sort?
>>>
>>>	Once upon a time I got very tired of the slowness of the
>>>	UNIX sort program. So I wrote a clone of it. I invented a
>>>	whole new algorithm for it which you could call a
>>>	linked-list merge sort.  It beat every other sort method
>>>	that I tested hands-down.  Its only drawback is the

If it's that good, perhaps someone could post a copy to comp.unix.sources,
preferably in a form that is a plug in replacement for qsort().

-- 
Jon Zeeff           		Branch Technology,
uunet!umix!b-tech!zeeff  	zeeff%b-tech.uucp@umix.cc.umich.edu

bill@proxftl.UUCP (T. William Wells) (07/15/88)

In article <4616@b-tech.UUCP>, zeeff@b-tech.UUCP (Jon Zeeff) writes:
) In article <688@clsib21.UUCP> ebg@clsib21.UUCP (Ed Gordon) writes:
) >
) >>>   A linked-list sort?
) >>>
) >>>   Once upon a time I got very tired of the slowness of the
) >>>   UNIX sort program. So I wrote a clone of it. I invented a
) >>>   whole new algorithm for it which you could call a
) >>>   linked-list merge sort.  It beat every other sort method
) >>>   that I tested hands-down.  Its only drawback is the
)
) If it's that good, perhaps someone could post a copy to comp.unix.sources,
) preferably in a form that is a plug in replacement for qsort().

That's me speaking, not Ed Gordon.  However, I might go ahead and
publish it.  There is one minor gotcha: the drawback, not copied
in the message above, is that the routine has got to get memory
for the linked list.  On virtual memory systems, this could cause
my sort to be significantly slower than on non virtual memory
systems.  There is also the overhead of actually allocating the
memory; while it ought not be significant, you never know.  (The
original implementation was on a non virtual system and the
linked list was allocated at the same time as the pointers to
the strings being sorted.)

Do you think I ought to post it over here so that you guys can
attack it before I submit a final version to the archives?  If I
do, I might also send over a qsort (created by disassembling
somebody's library, I forget whose) that I have.  It might be
interesting to beat on both pieces of code and to compare
benchmarks of the final sort routines.

tps@chem.ucsd.edu (Tom Stockfisch) (07/16/88)

In article <472@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>In article <4616@b-tech.UUCP>, zeeff@b-tech.UUCP (Jon Zeeff) writes:
>) In article <688@clsib21.UUCP> ebg@clsib21.UUCP (Ed Gordon) writes:
>) >>>   [I wrote a much-faster clone of qsort() using linked-lists]
> However, I might go ahead and
>publish it.  There is one minor gotcha: the drawback, not copied
>in the message above, is that the routine has got to get memory
>for the linked list.

Rewrite your interface so that the user must supply the memory for
the linked list and free it himself, so that if allocation/deallocation
is an issue, the user can handle it.
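
Something along these lines, say (the names are invented for the
example, and I am using the ANSI-style qsort() argument list):

	#include <stdlib.h>

	/* The real work: the caller hands in a scratch array of nmemb
	 * links, and this routine never allocates anything itself.
	 */
	int	msort_links(void *base, size_t nmemb, size_t size,
		    int (*compar)(const void *, const void *),
		    size_t *links);

	/* A qsort()-compatible wrapper can then do the allocation for
	 * callers who do not care where the memory comes from.
	 */
	void
	msort(void *base, size_t nmemb, size_t size,
	    int (*compar)(const void *, const void *))
	{
		size_t *links = (size_t *) malloc(nmemb * sizeof(size_t));

		if (links != (size_t *) 0) {
			(void) msort_links(base, nmemb, size, compar, links);
			free(links);
		}
		/* else: fall back or report the failure, as the
		 * application sees fit */
	}

That way the library never forces a malloc() on anyone.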
-- 

|| Tom Stockfisch, UCSD Chemistry	tps@chem.ucsd.edu

bill@proxftl.UUCP (T. William Wells) (07/17/88)

In article <255@chem.ucsd.EDU> tps@chem.ucsd.edu (Tom Stockfisch) writes:
> In article <472@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
> >In article <4616@b-tech.UUCP>, zeeff@b-tech.UUCP (Jon Zeeff) writes:
> >) In article <688@clsib21.UUCP> ebg@clsib21.UUCP (Ed Gordon) writes:
> >) >>>   [I wrote a much-faster clone of qsort() using linked-lists]
> > However, I might go ahead and
> >publish it.  There is one minor gotcha: the drawback, not copied
> >in the message above, is that the routine has got to get memory
> >for the linked list.
>
> Rewrite your interface so that the user must supply the memory for
> the linked list and free it himself, so that if allocation/deallocation
> is an issue, the user can handle it.

That leads to a quandary: should I make it a qsort replacement or
not?  If so, then this is not reasonable.  If not, then it is
less useful.

Then again, I could write it with variable arguments and a global
control variable that determines whether it does the allocation
or whether the caller passes allocated memory.

And that requires conditional compilation for those
who do not want to pull in malloc.

Ugh!

Sigh.

bill@proxftl.UUCP (T. William Wells) (07/19/88)

In article <8238@brl-smoke.ARPA>, gwyn@brl-smoke.ARPA (Doug Gwyn ) writes:
) In article <449@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
) >     Once upon a time I got very tired of the slowness of the
) >     UNIX sort program. So I wrote a clone of it. I invented a
) >     whole new algorithm for it which you could call a
) >     linked-list merge sort.  It beat every other sort method
) >     that I tested hands-down.  Its only drawback is the
) >     memory for the linked list.  (Actually, I'd bet that
) >     someone invented this sort before me, but at the time,
) >     I'd never heard of anything like it.)
)
) You don't actually need linked lists for this method; arrays of
) pointers to the records will do fine.  That is known as "list merge"
) sorting (see Knuth vol. 3), and it is my favorite method.

Actually, my sort method did not use a linked list per se;
instead, it used an array of links into that same array, where
the index of an entry was the index of the item being sorted.

And, while this sort is very similar to the list-merge method,
there are some differences.  You'll see what I mean when I
publish the code.
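
(Roughly this shape, though not my actual code: link[i] holds the
subscript of the item that follows item i, so merging two runs never
moves the items themselves.)

	#include <string.h>

	#define	END	(-1)	/* marks the last item of a run */

	/* Merge two runs of string subscripts, given the subscripts of
	 * their first items; returns the subscript of the first item
	 * of the merged run.  item[] and link[] are parallel arrays.
	 */
	int
	merge_runs(char **item, int *link, int r1, int r2)
	{
		int head, *tailp = &head;

		while (r1 != END && r2 != END) {
			if (strcmp(item[r1], item[r2]) <= 0) {
				*tailp = r1;
				tailp = &link[r1];
				r1 = link[r1];
			} else {
				*tailp = r2;
				tailp = &link[r2];
				r2 = link[r2];
			}
		}
		*tailp = (r1 != END) ? r1 : r2;
		return head;
	}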

) I, too, reimplemented the sort utility, although my motivation was
) to sort binary records instead of text lines.  As I iteratively
) improved it, it evolved into purely merge sorting, using external
) "run" files when necessary and sorting the internal buffer via
) list merge (originally this was done via heapsort).  Someday I
) hope to polish this up and contribute it to the sources newsgroup.

This does seem to be the canonical method of doing general
sorts.  Having heard of sorts which do not behave as if they were
this kind of sort, I do wonder whether any netters have any
comments on better ways of sorting large amounts of data?

) >Most people who defend the notion that there are good and bad
) >algorithms try to treat algorithms as if they exist in a vacuum.
) >This is Idealist hokum. Algorithms exist to serve purposes, the
) >quality of a given algorithm is simply how well it serves a
) >given purpose and thus can not be spoken of independently of
) >said purpose.
)
) Well said, but this should be tempered with the realization that
) code often survives its original intended purpose and gets pressed
) into service for unanticipated uses.  Therefore it often pays off
) in the long run to select an algorithm that works well enough to
) not cause problems when this happens.  Some sorting methods, e.g.
) bubble sort, are a real catastrophe when scaled up, so usually it
) is wise to pick a method that scales better.

In general, this is true.  However, it does not change the point
of what I said; it just points out that, when selecting an
algorithm, one had better consider the algorithm's future uses as
part of its purpose.  Otherwise, that algorithm will not be suited
to its future use.  As an aside, when I chose the bubble sort,
the items being sorted were entries in an input transaction;
since this was strictly limited by how much the user could type,
there was no chance that the bubble sort would become
inappropriate.

) One of my pet peeves is code that assumes that everything will
) always fit into main memory, when it would have been only slightly
) more trouble to code a method that doesn't rely on that assumption.
) Of course, sometimes there is a huge performance difference, but
) not always -- especially when you consider that relying on virtual
) memory amounts to surrendering application control over external
) buffering to the operating system, which doesn't know what the
) best buffering strategy for the particular application is.

This is right in line with one of my pet peeves: the presumption
that, because the power of computers is growing by leaps and
bounds, one can be as wasteful as one cares, and that somehow,
the computers will get powerful enough to run your program in
spite of the waste.  Argh!!!!

By the way, this isn't really C; where ought this kind of
discussion go?  I'd say comp.misc, but I am already embroiled in
an idiot discussion there and do not want to move anything more
over there unless I must.

bill@proxftl.UUCP (T. William Wells) (07/19/88)

In article <30827@cca.CCA.COM> g-rh@CCA.CCA.COM.UUCP (Richard Harter) writes:
) Wells discusses at length the point that 'good' and 'bad' are context
) dependent terms when applied to algorithms, with particular reference
) to sorting algorithms.
)
) What he did not do is point out that John's comments were a nonresponsive
) false dichotomy.  The original comment amounts to:
)
) "Pick the right algorithm and then optimize it."
)
) John's response amounts to:
)
) "An optimized bad algorithm is worse than a good algorithm."

Thanks for writing the second half of my article.  :-) I had
originally intended to point out just that but I got carried away
ranting about the false assumptions inherent in the usual notion
of "good" algorithms.  I even started the article out with this
summary line:

)Summary: ever heard of the "false dichotomy"?

Too bad I didn't keep it in mind while writing the article.  I
could have made it much shorter!

bill@proxftl.UUCP (T. William Wells) (07/20/88)

I wrote:

> That leads to a quandary: should I make it a qsort replacement or
> not?  If so, then this is not reasonable.  If not, then it is
> less useful.
>
> Then again, I could write it with variable arguments and a global
> control variable that determines whether it does the allocation
> or whether the caller passes allocated memory.
>
> And that requires conditional compilation for those
> who do not want to pull in malloc.
>
> Ugh!
>
> Sigh.

Braindead programmer!  Don't you know better than to make such
idiot statements in public!  Go back to your kennel!  :-)

Sorry folks.  If I used drugs (other than caffeine), I'd be
wondering what I was on when I wrote that.  Several of you
corrected me for this idiocy; thanks for the input.

To set the record straight, what I am going to do is to recode my
linked-list sort and my qsort as something that will be
appropriate for placing in a library and which will not embarrass
me unduly.  (It was, after all, several years ago that I wrote
it; I'd like to think that I can do a better job today.)

I'll be posting three routines and a program.  The three routines
will be twwsort, a linked-list sort wherein you provide the
memory; msort, a linked-list sort with an interface identical to
qsort; and qsort, my version of qsort.  The program will be a
test program for it.  This is likely to take several weeks, so
don't hold your breath.

I have not received any response to whether I ought to post it
over here for comment, comparison, and benchmarking, before
sending it to the appropriate source group.  So ought I or not?
E-mail is fine for responses pro or con.