[comp.archives] comp.archives digest 12/1/88

comparc@twwells.uucp (comp.archives) (12/01/88)

Well, I've decided to try doing the non-database postings as a
digest. We'll see how this works.  Here is a summary of what's in
this digest:

	A request for bibliographical databases.
	A suggestion that I get a domain name.
	Some more info about netlib.
	Some info on finding TeX related stuff.
	A request for uucp access to a comp.binaries.ibm.pc archive.
	Some discussion on digesting.
	Some discussion about mail based servers for comp.archives.
	Some discussion about hypertext, news archives, and news readers.

As suggested, I'm going to repost all the database information I have
about archives, just to keep everyone awake. :-) That will come in a
separate posting, right after my informational postings.

--------
From: emv@starbarlounge.cc.umich.edu (Edward Vielmetti)
Subject: collecting bibliographies
Date: Sun, 20 Nov 88 19:08:40 -0500
Message-Id: <8811210008.AA01850@starbarlounge.cc.umich.edu>
X-Postal-Address: Computing Center, 535 W. William, Ann Arbor MI  48109
X-Phone: (313) 936-2653

I am very impressed with comp.archives, keep up the good work.  This is
what usenet is all about.

I'm working on collecting bibliographical databases, hopefully in a format
usable directly by Pro-Cite or Professional Bibliographic Systems.  In particular,
I'm interested in collections of economics, political science,
artificial intelligence, computer and human networking, and population
biology.  I'll take them in any form that I can get and share them
as widely as possible.  Right now I can only handle 'refer' format
and I only have what I've typed in.

- 'refer' is really old.  Pro-Cite doesn't exist yet for unix.
  What other bibliographical formats can I expect to see?

- Are there tools that people already use for munging things from
  one format to another?

Thanks.

--Ed
Edward Vielmetti, U of Michigan Computing Center electronic mail group

--------
From: ulmo@ssyx.UCSC.EDU (Brad Allen)
Subject: Re: Administrivia 11/20/88
Date: Sun, 20 Nov 88 22:49:04 PST
Message-Id: <8811210649.AA23436@ssyx.ucsc.edu>

	[He sent me the RFC on digesting. Thanks.]

-brad allen
{ssyx.ucsc.edu|splat.aptos.ca.us|comix|cencom|star24}!ulmo

p.s. have you registered your Internet domainized hostname yet?
The US domain is geographical and free in itself; uunet charges
$35 to help you register, which I think is worth it.
UUNET is a very reliable link source, I think you'll gawk at the phone
bills but be happy with the reliability.

	[No, I haven't registered my site yet.  I don't have a modem,
	I'm directly connected to my work machine. This will change,
	hopefully by the end of the year.]

--------
From: w-colinp@microsoft.UUCP
Subject: Re: Administrivia 11/20/88
Message-Id: <8811220440.AA02612@beaver.cs.washington.edu>
Date: Mon Nov 21 20:04:56 1988
In-Reply-To: <198@twwells.uucp>
Confusion: Microsoft Corp., Redmond WA

Thanks for the info about the date format; I was unsure.

Sorry I didn't make it clear - I was suggesting an access type, like
"uucp" and "ftp" for "netlib" and other servers using the same software.
Other mail-based servers with different interfaces would not be in
the same category.  I know, however, that there are several sites running
netlib for various things.  Usually they hack it up to batch and regulate
mail to make it easiest for the regular users of the machine, but the
interface remains the same.

Is this clearer?
--
	-Colin (microsof!w-colinp@sun.com)

	[I'll tell you what: you send me the description of the netlib
	server software, or tell me where to find it, and I'll add the
	type to the format.]

--------
From: gst@wjh12.UUCP (Gary S. Trujillo)
Subject: Re: Looking for TeX/LaTeX archive
Date: 27 Nov 88 03:27:48 GMT
Message-Id: <321@wjh12.harvard.edu>
Keywords: TeX,LaTeX,archive,wanted
Organization: Harvard University, Cambridge MA

In article <568@gt-eedsp.UUCP> jensen@dsp.ee.gatech.edu (P. Allen Jensen) writes:
> What systems on the internet are archive sites for TeX/LaTeX and
> what versions and fonts are available from them ?

Well, the following applies mostly to AT&T versions, but you might find it
interesting nonetheless.

From spdcc!husc6!ukma!cwjcc!cwsys3.cwru.Edu!ferencz Tue Nov 22 18:43:21 EST 1988
Path: gnosys!spdcc!husc6!ukma!cwjcc!cwsys3.cwru.Edu!ferencz
From: ferencz@cwsys3.cwru.Edu (Don Ferencz)
Newsgroups: comp.sys.att,comp.terminals.tty5620,comp.unix.questions
Subject: Locations of TeX Previewers for 7300, 5620
Message-ID: <77@cwjcc.CWRU.Edu>
Date: 22 Sep 88 19:28:32 GMT
Sender: news@cwjcc.CWRU.Edu
Reply-To: ferencz@cwsys3.cwru.Edu (Don Ferencz)
Organization: CWRU Dept of Systems Engineering
Lines: 38

Hello again.

About a week ago, I asked if anyone knew where I could find some TeX
previewers, in particular for the 5620 DMD and the PC7300.  I got quite
a few responses about where to find some files, as well as a lot more
people asking me to disclose the whereabouts of this great stuff.
No problem!  (Please note I haven't checked the validity of all the
locations and/or the code itself.)

Harvard Townsend (harv@ksuvax1.cis.ksu.edu) reports a complete
release of TeX, LaTeX, and a previewer for the AT&T 7300 Unix PC
available from hotel.cis.ksu.edu (129.130.10.12) via anonymous FTP.
Several files are there, all in "cpio -c" format.  Look in
pub/toolchest/ctex.  I've checked this stuff; there's a lot there!

Eric Herrin (eric@ms.uky.edu) also has a previewer for the 3B1
(almost a 7300 ;-) via anonymous FTP from e.ms.uky.edu in
ftp/archive/text/TeXpreviewer as preview.tar.Z.  Eric warns,
however, that this one has no documentation.

Eduardo Krell (ekrell@ulysses.att.com) has given me the location
of a previewer for the 5620 DMD.  Again, just anonymous FTP to
cs.washington.edu and grab pub/dmd.tar.Z.

Finally, Solveig Whittle (att!ttrde!sol) has a previewer for
the 630 MTG (the successor of the 5620 DMD).  Drop him a note
for some info on this.

Thanks to everyone who helped out on this!  For those who will
brave the Internet in search of a better TeX previewer, I
salute you!

===========================================================================
| Don Ferencz                       |  "All the world's indeed a stage\   |
| ferencz@cwsys3.cwru.EDU           |   And we are merely players\        |
| Dept of Systems Engineering       |   Performers and portrayers"        |
| Case Western Reserve University   |     -- Rush                         |
===========================================================================

--
	Gary Trujillo
	(harvard!wjh12!gst)

	[Thanks for the posting. While I take note of relevant
	postings while I read the news, I don't happen to read those
	groups.]

--------
From: tomh@proxftl.UUCP (Tom Holroyd)
Subject: looking for an archive site for comp.binaries.ibm.pc
Date: 30 Nov 88 14:25:10 GMT
Message-Id: <1070@proxftl.UUCP>
Keywords: archive comp.binaries.ibm.pc
Organization: Proximity Technology, Ft. Lauderdale

I'm looking for a site that archives comp.binaries.ibm.pc.
I want to do anonymous uucp to request articles.
Does anybody know of such a site?

Tom Holroyd
UUCP: {uflorida,uunet}!novavax!proxftl!tomh

The white knight is talking backwards.

--------
From: sdp@sdp.UUCP
Subject: Re: Administrivia 11/20/88
Date: Wed Nov 30 09:28:49 1988
Message-Id: <8811302116.AA04185@uunet.UU.NET>
In-Reply-To: <198@twwells.uucp>
Organization: Intel Corp., OMSO UNIX Development, Hillsboro, OR

In article <198@twwells.uucp> you write:
[ ... ]
>I could make it official and post everything other than the database
>messages in digests.  If I did so, I'd have to have someone tell me
>what the rules are for making a digest. (Should I follow the form
>used in comp.sys.sun?) Does anyone have any feelings about this, pro
>or con? Any information?

You've probably read comp.risks.  Peter Neumann uses a digest form for that
group that is pretty nice.  It uses Subject: lines that rn can find (^G),
and has a table of contents at the top.  My only problem with his form is
that the subject of the digest is "RISKS-DIGEST vol xxx" which makes each
subject unique.  If each subject were always the same (i.e.  just
"RISKS-DIGEST"), the succeeding digest could be found by rn with ^N.  This
is more of an issue for you than for Peter, since there are no articles in
comp.risks other than digests.  For comp.archives this is not the case.

	[I'm not going to make the subject lines all be the same;
	that makes it more difficult to see which messages go with
	what. However, I believe that it would be relatively easy to
	make a kill file to filter out the postings you don't want to
	read since, if I'm doing this stuff as digests, there will
	only be a few kinds of articles.]
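
	[For anyone who does want to filter, a global KILL file
	entry along the following lines ought to junk the digests.
	This is a sketch from memory of rn's /pattern/:command form;
	check your rn documentation before relying on it:

```
/comp.archives digest/:j
```

	The pattern is matched against the Subject: line, and "j"
	marks matching articles as read.]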

An unrelated subject ...

I have been looking into hypertext lately.  Specifically, the feasibility of
adding hypertext features to USENET news readers.

	[Something I've long wished to see. Let me know how your
	research goes.]

						   A highly desirable
feature is the ability to retrieve an expired article.  [The answer to my
next question(s) may have already been posted, but I haven't had time lately
to read news thoroughly.]  Will the support package you're evidently
developing for use in conjunction with this group include a (probably
mail-based) archive server?

	[Probably. I'm writing the software to let one automatically
	maintain the database; I have some ideas for a server, but
	that's on the distant horizon.]

			     How hard would it be to make that server
retrieve archived news articles based on message-ID's?  Will your server
system handle requests for things that are not online (i.e., "archived"
archives)?  Are there plans to minimize human interaction in resolving
those requests (i.e., an archive request queueing mechanism)?

	[There would be two parts to a mail-based server: the mail
	receiver/sender and the server proper.  If I wrote the
	software, each would be a separate program, so the server
	proper could be replaced with whatever you want.]

My thought is that if everyone always archives articles originating from
their site, and the convention that message-ID's have the form
<unique-string@machine.domain> is made a requirement, then tracing back a
link to an old article is possible with no human intervention.  (I'm ignoring
the fact that with the present network it would take days to traverse the
link, and that the reading session would span multiple user login sessions.
Hopefully this will change with newer network technology, but there will
always be the possibility that a request cannot be resolved immediately.)

	[Having everyone archive their posted articles is a nice
	idea. However, I strongly suggest that you assume that this
	will never happen. Because it won't. I say this with
	confidence because I can just imagine the shrieks from the
	many sysadmins who have much better things to do with their
	time than maintain one more set of data. Especially
	*newsfeed* data. :-)]

It remains to be seen how useful HT features will be in net news.  I
strongly suspect that the usefulness of the links in a newsgroup will be
closely tied to the quality (S/N) of the group.

	[Oh, I'm certain that an intelligent application of HT
	features will be useful; the only real question is how to get
	the links set up without spending too much people time.]

Arrgh, I'm babbling on again.  Well, any feedback will be gratefully
accepted.  Keep up the good work.

Scott Peterson  --  OMSO Software Engineering  --  Intel,  Hillsboro OR

	  uunet!littlei\
  tektronix!reed!foobar >!sdp!sdp     -- or --     sdp@sdp.hf.intel.com
	  psu-cs!foobar/
						  [ a.k.a. nd31@psuorpn ]

--------
End of comp.archives digest

---
Bill
{uunet|novavax}!proxftl!twwells!bill

send comp.archives postings to twwells!comp-archives
send comp.archives related mail to twwells!comp-archives-request