[comp.binaries.ibm.pc.d] Source code newsgroup for MS-DOS

nather@ut-sally.UUCP (Ed Nather) (05/26/88)

In article <3186@bsu-cs.UUCP>, dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
> 
> Better still, we need comp.sources.msdos.  Why doesn't somebody propose
> it?  The net population likes sources, so it should have no trouble
> getting approved, and it might even encourage more source postings.
> -- 

I second the motion.  Before the Great Net Reorganization we had a group
for MS-DOS sources which was extremely popular -- some chaff, but a fair
amount of wheat, too.  Now MS-DOS sources are posted to a lot of different
newsgroups, many of them inappropriate, or are not posted at all.

As an example, I have collected or written a large number of Unix-like
commands that work under MS-DOS and provide a Unix-like environment on
a PC, since I go back and forth between it and 4.3BSD on a Vax.  (The
collection and its design are described in a chapter in the forthcoming
book "The MS-DOS Papers," published by the Waite Group.)  I intend to
place all of the sources in the public domain, under the name "PCnix" --
not to be confused with "Picnix" which is shareware and for which source
code is not provided.

I guess I'll have to round up a few BBSs willing to "carry" it -- but I'd
also like to make it available on the net, since some of the collection
came to me that way.  I'd like to return the favor.

If the newsgroup Rahul suggests is actually formed, I'll post PCnix to it.

-- 
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
nather@astro.AS.UTEXAS.EDU

randy@umn-cs.cs.umn.edu (Randy Orrison) (05/27/88)

In article <11803@ut-sally.UUCP> Ed Nather (nather@ut-sally) wrote:
|In article <3186@bsu-cs.UUCP> Rahul Dhesi (dhesi@bsu-cs) wrote:
|> Better still, we need comp.sources.msdos.
|I second the motion.
It will have my vote when votes are being counted!

|As an example, I have collected or written a large number of Unix-like
|commands that work under MS-DOS and provide a Unix-like environment on
|a PC...
If you don't find a place to post them, could you mail them to me?

	-randy
-- 
Randy Orrison, Control Data, Arden Hills, MN		randy@ux.acss.umn.edu
(OSF: Just say NO!)		    {ihnp4, seismo!rutgers, sun}!umn-cs!randy
----	"I consulted all the sages I could find in Yellow Pages,
----			but there aren't many of them."			-APP

w8sdz@brl-smoke.ARPA (Keith B. Petersen ) (05/27/88)

I don't agree that another newsgroup is needed for sources.  There is no
need to post clear text when we can package related files in ARCs and
reduce the chance of getting errors (because of the built-in CRC
checking used in ARC programs).

Many of the programs already distributed as ARCs contain full source
code.  The fact that they also contain executables is a plus for users
who don't have the needed compiler.  At least they can try the program,
and if they want to make a change they will have an incentive to purchase
the compiler.
-- 
Keith Petersen
Arpa: W8SDZ@SIMTEL20.ARPA
Uucp: {bellcore,decwrl,harvard,lll-crg,ucbvax,uw-beaver}!simtel20.arpa!w8sdz
GEnie: W8SDZ

jpn@teddy.UUCP (John P. Nelson) (05/27/88)

In article <7978@brl-smoke.ARPA> w8sdz@brl.arpa (Keith B. Petersen (WSMR|towson) <w8sdz>) writes:
>I don't agree that another newsgroup is needed for sources.  There is no
>need to post clear text when we can package related files in ARCs ...

As has already been discussed several times, this is a BAD IDEA.  Arc'ing
text files (program source files) causes significantly HIGHER transmission
costs for most usenet sites!  This is because news is often compressed
before transmission:  Ascii files compress to a much smaller size than
ascii files that have been arc'ed (compressed), then uuencoded.

This penalty applies only to ASCII text, not to BINARY files.  Please
use shar format or some other cleartext packaging method for source!
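
You can check the numbers on any Unix box.  A rough sketch (it assumes
arc, uuencode, and compress are all on hand; foo.c stands for any source
file you like):

	compress < foo.c | wc -c    # bytes uucp sends for the clear text
	arc a foo foo.c             # pack the same file (creates foo.arc)
	uuencode foo.arc foo.arc | compress | wc -c
	                            # bytes uucp sends after arc + uuencode

On text the second number comes out larger almost every time: ARC's
crunch and Unix compress are close cousins, so the arc step saves little,
and uuencoding then pads the result with bytes that compress poorly.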
-- 
     john nelson

UUCP:	{decvax,mit-eddie}!genrad!teddy!jpn
smail:	jpn@genrad.com

nather@ut-sally.UUCP (Ed Nather) (05/27/88)

In article <7978@brl-smoke.ARPA>, w8sdz@brl-smoke.ARPA (Keith B. Petersen ) writes:
> I don't agree that another newsgroup is needed for sources.  There is no
> need to post clear text when we can package related files in ARCs and
> reduce the chance of getting errors (because of the built-in CRC
> checking used in ARC programs).
> 
> Many of the programs already distributed as ARCs contain full source
> code.  [...]

Right now, there is *no* group to which an author can post MS-DOS sources,
whether packaged with executables or not, where other PC users can find
them -- other than comp.sources.misc, which is just as inappropriate for
executables.  Result: they get posted in comp.sys.ibm.pc or other
newsgroups that are not normally archived.

I agree the best solution is to post ARCed files that have source
and executables (*and* clear text documentation) included.  Maybe the name
of the group is ill-chosen.  Perhaps comp.binaries.ibm.pc could evolve
into a more appropriately named group.  One suggestion was
"comp.code.msdos", which would obviously cover sources and binaries --
docs would have to be "understood."  And non-IBM users of MS-DOS would be
covered as well.

Names are funny -- they can make a lot more difference than one might suspect.
If my parents had named me "Flash," as I suggested to them, I might have been
a hero instead of a hacker ...

-- 
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
nather@astro.AS.UTEXAS.EDU

pre1@sphinx.uchicago.edu (Grant Prellwitz) (05/28/88)

In article <4827@teddy.UUCP> jpn@teddy.UUCP (John P. Nelson) writes:
>In article <7978@brl-smoke.ARPA> w8sdz@brl.arpa (Keith B. Petersen (WSMR|towson) <w8sdz>) writes:
>>I don't agree that another newsgroup is needed for sources.  There is no
>>need to post clear text when we can package related files in ARCs ...
>
>As has already been discussed several times, this is a BAD IDEA.  Arc'ing
>text files (program source files) causes significantly HIGHER transmission
>costs for most usenet sites!  This is because news is often compressed
>before transmission:  Ascii files compress to a much smaller size than
>ascii files that have been arc'ed (compressed), then uuencoded.
>
>This penalty applies only to ASCII text, not to BINARY files.  Please
>use shar format or some other cleartext packaging method for source!

I have to disagree here.  While it may (or may not) be more efficient to post
clear text from the standpoint of a single file, this advantage is totally
lost if errors creep into the posting.  While this may not be a problem with
some of the talk groups, it is disastrous when dealing with a couple hundred
KB of source code.  If someone is unable to get a package to compile they will
spend a lot of time trying to determine if it is their fault, a true bug,
incompatibilities, or errors that crept in during transmission.  It would be
(in my humble opinion :^) much better to take out one of these variables by
posting in a format that has inherent error checking, not to mention a date
and time stamp so we can tell for sure if we have the most recent version.  I
think that the cost to the net of reposting suspected bad postings,
not to mention the cost in time to the employers whose employees are
trying to figure out what went wrong with a posting, would be more than the
extra cost of transmitting arced text files.

The above are my own opinions.  They are also those of my employer (me).
	Grant Prellwitz
	Prellwitz Computing Services


-- 
=====================Grant Prellwitz==========================
!ihnp4!gargoyle!sphinx!pre1          pre1@sphinx.UChicago.UUCP 
76474,2121 (CIS)                                    pre1 (BIX)  
!ihnp4!chinet!pre1    contents sole responsibility of poster.

jfh@rpp386.UUCP (John F. Haugh II) (05/28/88)

In article <4827@teddy.UUCP> jpn@teddy.UUCP (John P. Nelson) writes:
>In article <7978@brl-smoke.ARPA> w8sdz@brl.arpa (Keith B. Petersen (WSMR|towson) <w8sdz>) writes:
>>I don't agree that another newsgroup is needed for sources.  There is no
>>need to post clear text when we can package related files in ARCs ...
>
>As has already been discussed several times, this is a BAD IDEA.  Arc'ing
>text files (program source files) causes significantly HIGHER transmission
>costs for most usenet sites!

generally this is true.  it is possible however to suppress
file compression using the more reasonable arc commands.  if
you are sending out files which have been batched into arc
files that may suffer from being re-compressed, simply don't
have the files compressed when they are placed into the arc
archive.

- john.
-- 
John F. Haugh II                 | "If you aren't part of the solution,
River Parishes Programming       |  you are part of the precipitate."
UUCP:   ihnp4!killer!rpp386!jfh  | 		-- long since forgot who
DOMAIN: jfh@rpp386.uucp          | 

w8sdz@brl-smoke.ARPA (Keith B. Petersen ) (05/28/88)

When Usenet can guarantee error-free and non-truncated transmission of clear
text files, I will agree to posting clear text files.  Until that day
arrives (is anyone working on it?) I will continue to post them as ARC
files in the comp.binaries.ibm.pc newsgroup.
-- 
Keith Petersen
Arpa: W8SDZ@SIMTEL20.ARPA
Uucp: {bellcore,decwrl,harvard,lll-crg,ucbvax,uw-beaver}!simtel20.arpa!w8sdz
GEnie: W8SDZ

W8SDZ@SIMTEL20.ARPA (Keith Petersen) (05/30/88)

John, when Usenet can assure accurate and non-truncated transmission of
newsgroups, I will agree to clear-text postings.  Meantime ARCs are the
only way to be sure it's all there and not garbled.  It's a real
shame that the Unix world can't agree on a common checksum or
CRC-checking program.  The Unix "sum" command produces at least two
different results depending on which version of Unix your host is
running.

The real problem is that the news software doesn't have any error
checking to know when /usr/spool/news fills up.  According to some net
wizards that's the real cause of truncated postings.

On the Arpanet our hosts signal the sender that there was an error and
the transfer should be tried again later.

--Keith

jpn@teddy.UUCP (John P. Nelson) (05/31/88)

In article <2143@rpp386.UUCP> jfh@rpp386.UUCP (The Beach Bum) writes:
>In article <4827@teddy.UUCP> jpn@teddy.UUCP (John P. Nelson) writes:
>>As has already been discussed several times, this is a BAD IDEA.  Arc'ing
>>text files (program source files) causes significantly HIGHER transmission
>>costs for most usenet sites!
>
>generally this is true.  it is possible however to suppress
>file compression using the more reasonable arc commands.

Sigh.  It is not the COMPRESSION that causes the problem so much as
it is the UUENCODING!  The REASON there is a transmission penalty is
that a .arc file MUST be uuencoded because it is binary, whether its
component files are compressed or not!
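
(The arithmetic: uuencode packs 3 binary bytes into 4 printable
characters, 45 input bytes per 62-character output line, so a .arc file
grows by about 38% the moment it is encoded.  A quick check, foo.arc
being any archive you have lying around:

	wc -c < foo.arc                   # raw archive size
	uuencode foo.arc foo.arc | wc -c  # what actually gets posted: ~1.38x

and when the archive's contents are already compressed, the uuencoded
output gives back little of that to the news batcher's compress.)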

I realize that people like ARC, but all other sources groups live with
the problems and limitations of posting via "shar" archives!  MSDOS
sources are no different!  When the various "sources" newsgroups were
set up, other archiving techniques were considered.  "Shar" is a
lowest common denominator, which is why it is used!  No one to date
has come up with a better archiver that:

    #1 does not generate a BINARY result.
    #2 is as simple to extract files from when the de-archiving
	program is not available.
    #3 has any significant advantages over SHAR.  True, shar doesn't
       have a checksum, but the character count catches 99% of transmission
       errors!
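
For reference, the integrity check in a shar is nothing more exotic than
this fragment (details vary from one shar generator to the next, and the
byte count 1234 here is just a placeholder):

	sed 's/^X//' > foo.c << 'SHAR_EOF'
	X/* each line of foo.c, prefixed with X */
	SHAR_EOF
	if test 1234 -ne "`wc -c < foo.c`"
	then
		echo shar: error transmitting foo.c \(should be 1234 characters\)
	fi

Crude, but truncation almost always changes the byte count, and any sh
can run it with no special tools at all.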
-- 
     john nelson

UUCP:	{decvax,mit-eddie}!genrad!teddy!jpn
smail:	jpn@genrad.com

jpn@teddy.UUCP (John P. Nelson) (05/31/88)

In article <7985@brl-smoke.ARPA> w8sdz@brl.arpa (Keith Petersen <w8sdz>) writes:
>When Usenet can guarantee error-free and non-truncated transmission of clear
>text files, I will agree to posting clear text files.

I'm afraid that Keith is blowing smoke in our collective eyes.  The
most common form of corruption under USENET is either missing or
truncated files.  Shar format detects both of these problems quite
nicely, thank you.  (ARC is not immune to these problems either,
although I will admit that the checksums are nice.)

I'm not saying that "shar" is perfect.  Just that we don't yet have
anything better.  All other source groups use "shar":  If we want
MSDOS sources, we should stick to the established convention.
-- 
     john nelson

UUCP:	{decvax,mit-eddie}!genrad!teddy!jpn
smail:	jpn@genrad.com

tneff@dasys1.UUCP (Tom Neff) (05/31/88)

John Nelson makes some interesting points about shar vs. ARC or whatever
for posting sources and binaries to Usenet.  Let me make a couple of points
of my own.

 * Compressing sources into an ARC or ZOO bundle, then uuencoding and
posting, seems to me to be a bad idea.  The arguments over how much you
save in filesize don't take into account the amount of bandwidth required
to actually do the send, or the amount of CPU required to pre- and post-
process everything.  I can already tell when this mini is into its daily
newsfeed decompress - things slow down.  Adding to this unnecessarily
should be avoided.  Also, I think it's important to be able to browse
the contents of a source posting in easily readable ASCII form, before
deciding whether to do anything with it.  I get upset seeing someone post
a source bundle described as "neat C routines to do this-n-that" only
to see a vast, dismal wall of UUENCODE hieroglyphics staring at me. I
have to grab it, cut and paste, decode, de-ARC or whatever, and only then
can I see it's a bunch of freshman functions or the greatest thing since
sliced bread.

 * However, the existing shar mechanism is somewhat deficient by comparison
with what ZOO or ARC offers.  Specifically, you get much better checksumming
("99% of transmission errors" is probably inaccurate, John, but even if
that's how good the character count trick was, 99% is considered a dismal
confidence rating for a software integrity algorithm), explicit control
on file dates/times, and (in the case of ZOO) subdirectory control where
that is important.
   On the other hand, ZOO in its present form does two things which cause
problems: It generates binary compressed files, running into my first
objection above; and it requires a decoder to make any sense of a ZOO file
at all.  The shell archive is self-contained (if you have a Unix shell or
workalike) and the contents are readable ASCII even before unpacking.
Bandwidth is well conserved too, since the 16-bit compress used with uucp
gives excellent numbers on ASCII text.

 * So we have a slight dilemma.  But I propose that Rahul give us a
technical solution!  He can add a new feature to ZOO that *generates
a shell archive* from the input files, instead of a compressed
binary .zoo bundle.  The input files would be assumed, of course, to be
ASCII text.  The special shell archive (call it a ZSA file) would 
be downward compatible with the existing shar format.  So if you don't
have the decoder, you can still extract files by running the thing.  But
it would also contain the additional control information ZOO retains,
including file dates, 32-bit checksums, subdirectory structure and so
on, in ASCII form in special comment lines.  That way, if you do have the
proper decoder, you can run it on the ZSA bundle to extract files with
exhaustive integrity checking, timestamp control, selectivity and whatever
else ZOO gives you.
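
   To make the idea concrete, one member of a ZSA file might look roughly
like this (the format, the field names and the numbers are placeholders of
my own invention, nothing Rahul has agreed to):

	#ZSA file=src/foo.c size=1234 crc32=0x1B2C3D4E date=88-05-31 time=14:02
	sed 's/^X//' > foo.c << 'ZSA_EOF'
	X/* file contents, shar-style */
	ZSA_EOF

A plain sh skips the #ZSA line as a comment and unpacks the file the old
way; a ZSA-aware decoder reads it and gets the 32-bit checksum, timestamp
and subdirectory path for free.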

   This seems to me to give us the best of both worlds.  What do you think?
Rahul?
-- 
Tom Neff			UUCP: ...!cmcl2!phri!dasys1!tneff
	"None of your toys	CIS: 76556,2536		MCI: TNEFF
	 will function..."	GEnie: TOMNEFF		BIX: are you kidding?

jfh@rpp386.UUCP (John F. Haugh II) (06/01/88)

In article <7985@brl-smoke.ARPA> w8sdz@brl.arpa (Keith Petersen <w8sdz>) writes:
>When Usenet can guarantee error-free and non-truncated transmission of clear
>text files, I will agree to posting clear text files.  Until that day
>arrives (is anyone working on it?) I will continue to post them as ARC
>files in the comp.binaries.ibm.pc newsgroup.

the advantage of arc files is that arc includes crc's, whereas shar doesn't.
however, shar does include character and line counts, and if a file is the
victim of a line hit, at least you might be able to figure out what happened.
with arc files, if you take a hit and an INSTRUCTION gets changed, you may
never figure the problem out.

binaries are fundamentally worthless because when broken only a person with
source can fix them.  and in the case of certain `shareware' products, the
author will only provide the fix for a $$ ``donation''.

- john.
-- 
John F. Haugh II                 | "If you aren't part of the solution,
River Parishes Programming       |  you are part of the precipitate."
UUCP:   ihnp4!killer!rpp386!jfh  | 		-- long since forgot who
DOMAIN: jfh@rpp386.uucp          | 

rroot@edm.UUCP (Stephen Samuel) (06/02/88)

From article <4732@dasys1.UUCP>, by tneff@dasys1.UUCP (Tom Neff):
> John Nelson makes some interesting points about shar vs. ARC or whatever
> for posting sources and binaries to Usenet.  Let me make a couple of points
> of my own.
I'll just repeat the arguments -- mainly: ARC/ZOO files result in binaries
which are a BITCH to read from inside of rnews/etc or any other time when
ARC/ZOO isn't directly accessible, while, on the other hand, ARC gives nice
checksumming.
Also: ARC results in a binary file which has to be UUENCODEd to transmit.
A uuencoded file eats more:
(1) disk space
(2) CPU time to decode if you're trying to read it
(3) transmission bandwidth (even if it's compressed by the transmitter)
(4) serenity
  (a) (I get PISSED OFF when I try to read an article only to find
	some encoded GARBAGE)
  (b) To get one source set onto floppy so I could give it to a friend,
	(the arc file wouldn't fit on one floppy) I had to:
	i.   find an ARC for my system
	ii.  compile ARC 
	iii. save the program I wanted
	iv.  peel the headers
	v.   combine and UUDECODE it, 
	vi.  unarc it
	vii. kermit the pieces over to my PC
Had it been shar'ed, I would have been able to work with it a LOT more easily.

(5) and it catches very few errors that a char count doesn't:
Checksums CATCH, but do not FIX, transmission problems.
Usenet (uucp/ftp) ALREADY does a CRC check in transmission so, for the most
part, ARC's checksum is redundant.  Almost all errors seem to be in filesize
(truncation).  This is why a file size count is generally sufficient.
(it probably catches WELL over 99% of all USENET errors)

>  * So we have a slight dilemma.  But I propose that Rahul give us a
> technical solution!  He can add a new feature to ZOO that *generates
> a shell archive* from the input files, instead of a compressed

Yes, yes, yes!! I like!! 
  Just one thing: ZOO must create an ENTIRELY UNBINARY format in this case..
Even the 'magic number' must be ascii.
-- 
-------------
 Stephen Samuel 
  {ihnp4,ubc-vision,vax135}!alberta!edm!steve
  or userzxcv@uofamts.bitnet

ptripp@udenva.UUCP (06/03/88)

I propose that a source code newsgroup for IBM PC's and compatible micro-
computers be formed.  Appropriate titles would be:

	comp.sources.ibmpc
	comp.sources.msdos

The disadvantage of the latter name is that it would imply the exclusion of
OS/2 sources, whereas the former could be stretched to cover just about
any source code for the IBM PC world, including code for the PS/2 line.

I do not like encoded postings.

A source code newsgroup for ibmpc is long, long overdue.

Phil L. Tripp, University of Denver
ptripp@nike.cair.du.edu

wfp@dasys1.UUCP (William Phillips) (06/04/88)

In article <1785@van-bc.UUCP> skl@van-bc.UUCP (Samuel Lam) writes:

>With the current scheme, if an ARC file got hit during transmission, there
>is no easy way to detect this before you start unARCing, since none of those
>CRC's are of any use until you have got the *unARCed* content of the ARCed
>files.

Not true.  You have the very useful option with both arc and pkxarc to
_test_ your .arc file before you ever unarc it.  This is exactly what
I do with every .arc file I download.

pkxarc -t foo.arc
or
arc /t foo.arc  (I think this is the syntax -- I don't use arc any more)

It's fast, it's easy, and it works.


-- 
William Phillips                 {allegra,philabs,cmcl2}!phri\
Big Electric Cat Public Unix           {bellcore,cmcl2}!cucard!dasys1!wfp
New York, NY, USA                !!! JUST SAY "NO" TO OS/2 !!!

smith@ncoast.UUCP (06/04/88)

> Article <11803@ut-sally.UUCP> From: nather@ut-sally.UUCP (Ed Nather)

> In article <3186@bsu-cs.UUCP>, dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
> > Better still, we need comp.sources.msdos.  Why doesn't somebody propose
> > it?  The net population likes sources, so it should have no trouble
> > getting approved, and it might even encourage more source postings.
> 
> I second the motion.  [...]
> 
Discussions of this nature belong in news.groups according
to the guidelines put out by Spaf.
-- 
		      decvax!mandrill!ncoast!smith
			ncoast!smith@cwru.csnet 
		(ncoast!smith%cwru.csnet@csnet-relay.ARPA)

dick@slvblc.UUCP (Dick Flanagan) (06/04/88)

In article <4732@dasys1.UUCP> tneff@dasys1.UUCP (Tom Neff) writes:
> [... lots of good stuff ...]

>  * So we have a slight dilemma.  But I propose that Rahul give us a
> technical solution!  He can add a new feature to ZOO. . . .
> [...]

>    This seems to me to give us the best of both worlds.  What do you think?
> Rahul?

        Rahul will be off-net through the end of June.  You
        might want to email your proposal to him so it will
        be in his mailbox when he gets back.  Otherwise, I'm
        sure these articles will have long expired.

Dick

--
Dick Flanagan, vacation moderator of comp.binaries.ibm.pc
{backbones}!ucbvax!ucscc!slvblc!dick  or  slvblc!dick@ucscc.ucsc.edu

tneff@dasys1.UUCP (Tom Neff) (06/04/88)

In article <3128@edm.UUCP> rroot@edm.UUCP (Stephen Samuel) writes:
>From article <4732@dasys1.UUCP>, by tneff@dasys1.UUCP (Tom Neff):
>>  * So we have a slight dilemma.  But I propose that Rahul give us a
>> technical solution!  He can add a new feature to ZOO that *generates
>> a shell archive* from the input files, instead of a compressed
>
>Yes, yes, yes!! I like!! 
>  Just one thing: ZOO must create an ENTIRELY UNBINARY format in this case..
>Even the 'magic number' must be ascii.

I think if you read my posting carefully, you'll find that that's what
I wanted too.  Filesize, date/time, subdirectory info and checksum should
all be in plain ASCII so they can be transmitted everywhere.

-- 
Tom Neff			UUCP: ...!cmcl2!phri!dasys1!tneff
	"None of your toys	CIS: 76556,2536		MCI: TNEFF
	 will function..."	GEnie: TOMNEFF		BIX: are you kidding?

skl@van-bc.UUCP (Samuel Lam) (06/07/88)

In article <4802@dasys1.UUCP>, wfp@dasys1.UUCP (William Phillips) wrote:
>In article <1785@van-bc.UUCP> skl@van-bc.UUCP (Samuel Lam) writes:
>>With the current scheme, if an ARC file got hit during transmission, there
>>is no easy way to detect this before you start unARCing, since none of those
>>CRC's are of any use until you have got the *unARCed* content of the ARCed
>>files.
>Not true.  You have the very useful option with both arc and pkxarc to
>_test_ your .arc file before you ever unarc it. ...

Did you know that testing an archive with -t (or /t, or t) is nothing
more than *unarcing* the files in the archive and redirecting the output
to the bit bucket?  (That's what the (de)archiver does internally in
response to the test-archive request.)

Therefore, testing an archive before extracting does not provide any
protection which extracting right away won't give.  And those corruptions
which drive the extracting process nuts will drive the test-archive
process nuts just the same.

-- 
Samuel Lam     {ihnp4!alberta,watmath,uw-beaver,ubc-vision}!ubc-cs!van-bc!skl