[comp.graphics] Yet more elementary dross.. a suggestion

cdshaw@alberta.UUCP (Chris Shaw) (03/28/88)

I must say I'm getting rather tired of coming across articles that ask simple
questions. Seems to indicate that people are posting the question as soon as
it comes to mind, as opposed to going to the library or bookshelf to get 
the appropriate book. So... why not create a new newsgroup..

comp.graphics.newuser... which would have a whole bunch of answers to the
standard questions which come around again and again and again and again and
again and again and again and again.

So what do you think? Dumb idea? Criminally smart idea? Tell me.

-- 
Chris Shaw    cdshaw@alberta.UUCP (via watmath, ihnp4 or ubc-vision)
University of Alberta
CatchPhrase: Bogus as HELL !

garry@batcomputer.tn.cornell.edu (Garry Wiegand) (03/29/88)

In a recent article cdshaw@pembina.UUCP (Chris Shaw) wrote:
>I must say I'm getting rather tired of coming across articles that ask simple
>questions. Seems to indicate that people are posting the question as soon as
>it comes to mind, as opposed to going to the library or bookshelf to get 
>the appropriate book...

I think it's entirely appropriate for people to ask simple questions
here (as long as somebody is still enjoying answering them).

Telling people to "go to the library" is often not very kind or helpful.
This business we're in hasn't been around very long, and the reference
works just aren't very solid yet. My own experience with the literature
has been that:

A) It's hard work to find stuff. As far as I know, the ACM computer
   literature index died some years ago. What I generally do when I
   have a new topic to research, and I want to find the fresh work, is 
   to pull all the recent issues of all the relevant journals off the
   shelf and start thumbing through them. I hope to find a "related"
   article and then work the references back. Pretty tedious.

B) Many elementary algorithms which you would think "of course"
   are written down and available actually aren't. Oftentimes there
   are bits and pieces of things scattered around all over, but no one
   ever got around to putting it all together sensibly and coherently 
   before everybody lost interest in the subject. The hidden-line/
   hidden-surface literature, for example, is a mess.

C) Often when you do find an algorithm, the author wrote it out in
   narrative English. It can be amazingly hard to figure out what
   in the world the person meant to say! (There's a certain value
   in this game to making your solution *appear* to be nice and
   simple & elegant :-(.  

   It can be even harder to figure out whether it's worth figuring out - 
   whether the paper in your hand actually has some major flaw or missing 
   improvement.

D) My experience has been that even the nicely pseudo-coded algorithms 
   in the classics (Newman and Sproull, Rogers, etc) are sometimes not
   quite right and/or just plain wrong. For example, all but one of the 
   RGB/HLS conversion algorithms I've ever seen choose a bad method, 
   and then a *lot* of them proceed to add bugs to the bad method 
   in the course of writing it up. [I don't want to be too harsh on the 
   authors: it's really hard work to make a published detailed algorithm 
   "perfect".]
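
To illustrate how easy these conversions are to get wrong, here is a minimal RGB-to-HLS sketch in C, following the usual textbook formulation. The function name and the [0,1] value convention are my own choices for illustration, not anything from the thread:

```c
#include <math.h>

/* Convert RGB (each in [0,1]) to HLS: hue in degrees [0,360),
   lightness and saturation in [0,1].  Achromatic input (r==g==b)
   gets hue 0 by convention, since hue is undefined there. */
void rgb_to_hls(double r, double g, double b,
                double *h, double *l, double *s)
{
    double mx = r > g ? (r > b ? r : b) : (g > b ? g : b);
    double mn = r < g ? (r < b ? r : b) : (g < b ? g : b);
    double d  = mx - mn;

    *l = (mx + mn) / 2.0;

    if (d == 0.0) {              /* grey: saturation 0, hue undefined */
        *s = 0.0;
        *h = 0.0;
        return;
    }

    /* The saturation denominator switches at L = 1/2 -- one of the
       details that write-ups frequently get wrong. */
    *s = (*l <= 0.5) ? d / (mx + mn) : d / (2.0 - mx - mn);

    if (mx == r)      *h = (g - b) / d;        /* between yellow & magenta */
    else if (mx == g) *h = 2.0 + (b - r) / d;  /* between cyan & yellow */
    else              *h = 4.0 + (r - g) / d;  /* between magenta & cyan */

    *h *= 60.0;
    if (*h < 0.0) *h += 360.0;
}
```

Even in this short form there are several places to slip: the denominator switch, the wrap-around of negative hues, and the undefined hue for greys.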

E) Finally, there's always some residual worth in rehashing "old"
   subjects - often they're "old" because people got bored with them,
   not because the "best" solution had actually been reached. For
   example, I recall there's a tweak you can do to Liang/Barsky (ie, 
   parametric) clipping that makes it run a bit faster than the published 
   version. And we've noticed that even old Bresenham's algorithm is 
   *not* the absolute fastest way to fill a Bresenham line on some 
   machines. The silliest things can still allow room for improvement.
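
For reference, the "old Bresenham's algorithm" mentioned above, in its usual integer all-octant form. This is a textbook sketch in C; the interface (writing pixel coordinates into caller-supplied arrays) is my own choice, purely so the result can be inspected:

```c
#include <stdlib.h>

/* Rasterize the line from (x0,y0) to (x1,y1) using integer error
   terms only.  Each pixel's coordinates are appended to xs[]/ys[];
   the return value is the pixel count, always max(|dx|,|dy|) + 1,
   so the caller must size the arrays accordingly. */
int bresenham(int x0, int y0, int x1, int y1, int *xs, int *ys)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    int n = 0;

    for (;;) {
        xs[n] = x0; ys[n] = y0; n++;
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; }   /* step in y */
    }
    return n;
}
```

The point in the post stands: this loop is correct, but on a given machine it is not necessarily the fastest way to *fill* the line, which is exactly the kind of residual improvement the literature stopped chasing.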

Thanks for letting me ramble. Please be kind to new (and old) readers
asking simple questions?

garry wiegand   (garry@oak.cadif.cornell.edu - ARPA)
		(garry@crnlthry - BITNET)

geoff@desint.UUCP (Geoff Kuenning) (04/02/88)

In article <4219@batcomputer.tn.cornell.edu> garry@oak.cadif.cornell.edu
(Garry Wiegand) makes many excellent points for continuing the discussion of
"elementary" issues.  Well said, Garry!

I'd like to add one thing to his points:  "going to the library" is not
necessarily even a viable option for many people.  Here in LA, for
example, one would think that there are lots of places to go for
graphics publications.  However, important parts of UCLA's collection
only go back five years;  USC has similar deficiencies.  The best
graphics collection is apparently at UC Irvine;  depending on where you
live, that can be a 2 to 3 hour drive from home.  For someone who
doesn't live in a big city with lots of universities, going to the
library for graphics literature could even become a two-day
project.
-- 
	Geoff Kuenning   geoff@ITcorp.com   {uunet,trwrb}!desint!geoff

ksbooth@watcgl.waterloo.edu (Kelly Booth) (04/04/88)

In article <1705@desint.UUCP> geoff@desint.UUCP (Geoff Kuenning) writes:
>I'd like to add one thing to his points:  "going to the library" is not
>necessarily even a viable option for many people.  Here in LA, for
>example...

Did you try the LA local ACM SIGGRAPH?  Some of the locals (LA, Bay Area,
NY, and New England) have been around for quite some time and have as members
people who have been in graphics many years.

snyder@boreas.steinmetz (Snyder) (04/05/88)

In article <1705@desint.UUCP> geoff@desint.UUCP (Geoff Kuenning) writes:

>I'd like to add one thing to his points:  "going to the library" is not
>necessarily even a viable option for many people.  Here in LA, for
>example, one would think that there are lots of places to go for
>graphics publications.  However, important parts of UCLA's collection
>only go back five years;  USC has similar deficiencies.  The best
>graphics collection is apparently at UC Irvine;  depending on where you
>live, that can be a 2 to 3 hour drive from home.  For someone who
>doesn't live in a big city with lots of universities, I could imagine that
>going to the library for graphics literature could even become a two-day
>project.
>-- 
>	Geoff Kuenning   geoff@ITcorp.com   {uunet,trwrb}!desint!geoff

Ever hear of inter-library loans? I'm not from California, but many libraries
here on the east coast will forward material, or in the case of journal
articles, copies of the articles. If you can find a reference (most university
libraries should be able to help you with a literature search also),
and your library doesn't have the item, chances are they can get it for
you.

My experience in the past has been that this takes on the order of a week,
maybe two. And you get the information firsthand. Using news, it will
probably also take a week, and you will either get secondhand information
or a reference to what you should have had anyway.

I'm not against asking questions on the network. I think it's good and
one of the purposes of this newsgroup. But making some effort to find
the answer to your question yourself is probably more beneficial to you
and less annoying to the rest of the network.

Derek Snyder

geoff@desint.UUCP (Geoff Kuenning) (04/09/88)

In article <10253@steinmetz.steinmetz.ge.com> snyder@boreas.UUCP (Derek
Snyder) writes:

> Ever hear of inter-library loans? I'm not from California, but many libraries
> here on the east coast will forward material, or in the case of journal
> articles, copies of the articles. If you can find a reference (most university
> libraries should be able to help you with a literature search also),
> and your library doesn't have the item, chances are they can get it for
> you.

As it happens, all of the libraries in the UC system, plus some others like
USC, are tied together this way;  there's even a computer database (I think
it's called Euclid) for searching through all libraries at once.  Using this,
it is possible to locate all articles with certain keywords in the title, or
by a certain author, plus subject-categorized searching.

Assuming I live in a built-up area like LA (I do), the system then works
like this:

	(1) Drive 45 minutes to UCLA (the nearest one for me), and use
	    Euclid to look up the keywords I can think of.  Pay $100-$200
	    for a library card (or, cheaper, enroll in any extension course
	    and pay $8.75) on the spot, and order the 5-50 articles that
	    the search discovers.
	(2) Wait about a week while these things show up.  Of course, they
	    won't show up all at once (some will be out), and the library
	    won't hold them forever until you come to get them.  So you
	    will have to make 2-3 more trips to see what you ordered.
	(3) Of course, keywords and subject categorizations always miss
	    a lot of stuff, as well as producing a lot of chaff.  (The
	    last time I researched something, my first search missed an
	    *entire journal* devoted exclusively to the subject I was
	    interested in.)  So you will have to go through the references
	    and develop a supplementary list of things to order.  Often,
	    this will be larger than the first.
	(4) Repeat steps 2-3 several times, resulting in a total of 5-10
	    trips to your not-so-nearby library, to get a comprehensive
	    list of references on the subject.  Each trip takes 1.5 hours
	    of your time, not counting time spent in the library.
	(5) Decipher all the articles you collected.  Many will be mere
	    expansions on previous articles, saying "we assume the reader
	    is familiar with the Foo algorithm and notation", so that you
	    have to track down and read a 1965 article before you can
	    figure out that the thing you have isn't relevant.
	(6) Having selected a promising algorithm or algorithms, repeat
	    steps 2-3 at least once more, this time getting all copies
	    of the next year or two of the relevant journal after the
	    article appeared, so you can search for corrigenda and letters
	    entitled "A Note on the Foo Algorithm."  Typically, these
	    items aren't indexed, yet they are often of critical importance.
	(7) Now the fun begins.  Implementing the algorithm can take weeks
	    to months, depending on complexity and on how well the paper
	    you selected describes it.  Ever try to produce decent C
	    code for a 1975 Fortran-specified algorithm?  And that's
	    the easy case;  it's even more fun working with a paper
	    translated from a foreign language that never gave code in
	    the first place.

I know whereof I speak;  I have researched more than one subject
exhaustively, starting essentially from scratch.  The subject I covered
most thoroughly literally took me many months and occupied several hundred
hours of my time.

Given the difficulties of this (which are badly compounded if you
aren't close to a good university library), is it surprising that people
ask the net?  A simple net broadcast has a good chance of reaching a person
who makes a specialty of the field you are investigating;  that person often
can point you directly at the 2 or 3 really important works on the subject.
An example is compression:  there are probably 50 papers in the field, but
all you really need to read is Lempel-Ziv.  Furthermore, there is a good
chance that you will discover somebody who has working code and is willing
to send it to you.
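
To give a concrete sense of why Lempel-Ziv is the one thing worth reading in that field: the core of the scheme fits in a few dozen lines. The sketch below is my own illustration in C of Welch's LZW variant (the form most working code of the period used), not code from the thread; for simplicity it emits codes into an int array instead of packing bits, and searches the dictionary linearly:

```c
#define DICT_MAX 4096   /* 12-bit code space */

static struct { int prefix; unsigned char ch; } dict[DICT_MAX];
static int dict_size;

/* Linear search for the phrase (prefix-code, ch).  Fine for a sketch;
   a real implementation would use a hash table or trie. */
static int find(int prefix, unsigned char ch)
{
    for (int i = 256; i < dict_size; i++)
        if (dict[i].prefix == prefix && dict[i].ch == ch)
            return i;
    return -1;
}

/* LZW-compress n input bytes, writing phrase codes (0-255 are the
   single bytes, 256 and up are learned phrases) into out[].
   Returns the number of codes emitted. */
int lzw_compress(const unsigned char *in, int n, int *out)
{
    int w = -1, m = 0;
    dict_size = 256;

    for (int i = 0; i < n; i++) {
        unsigned char c = in[i];
        if (w < 0) { w = c; continue; }      /* first byte */
        int j = find(w, c);
        if (j >= 0) {
            w = j;                           /* extend current phrase */
        } else {
            out[m++] = w;                    /* emit longest match */
            if (dict_size < DICT_MAX) {      /* learn phrase w + c */
                dict[dict_size].prefix = w;
                dict[dict_size].ch = c;
                dict_size++;
            }
            w = c;
        }
    }
    if (w >= 0)
        out[m++] = w;                        /* flush final phrase */
    return m;
}
```

On the input "ABABABA" this emits four codes (65, 66, 256, 258) for seven bytes: the dictionary learns "AB", "BA", and "ABA" as it goes, which is the whole trick.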

> But making some effort to find
> the answer to your question yourself is probably more beneficial to you
> and less annoying to the rest of the network.

Depends on whether you want to know about the subject, or just want
working code.
If I plan a PhD thesis on the subject, I'd agree.  But if the problem
is incidental to my main goals, I'm interested in a solution, and the
quickest way is often the net.
-- 
	Geoff Kuenning   geoff@ITcorp.com   {uunet,trwrb}!desint!geoff