[comp.std.c] What's a good prototype for write

friedl@vsi.COM (Stephen J. Friedl) (10/25/88)

What is a proper prototype for write(2)?  It would have to
be one of:

	extern int write(int, const void *, unsigned);
or
	extern int write(int, const void *, int);

both of which are wrong in some respect.  I've seen
the first used, but how does this deal with a successful
write very near the maximum unsigned value?  The return
would then appear to be negative.

     Steve

P.S. - I know that write(2) is not specified in the dpANS.

-- 
Steve Friedl    V-Systems, Inc.  +1 714 545 6442    3B2-kind-of-guy
friedl@vsi.com     {backbones}!vsi.com!friedl    attmail!vsi!friedl
----Nancy Reagan on 120MB SCSI cartridge tape: "Just say *now*"----

faustus@ic.Berkeley.EDU (Wayne A. Christopher) (10/26/88)

In article <902@vsi.COM>, friedl@vsi.COM (Stephen J. Friedl) writes:
> 	extern int write(int, const void *, unsigned);
>
> ... how does this deal with a successful
> write very near the maximum unsigned value?  The return
> would then appear to be negative.

If you have a program that has a good reason for wanting to write 2 billion
bytes at a time, please tell me what it is...

	Wayne

gwyn@smoke.BRL.MIL (Doug Gwyn ) (10/26/88)

In article <902@vsi.COM> friedl@vsi.COM (Stephen J. Friedl) writes:
>What is a proper prototype for write(2)?

	extern int write(int, const char *, unsigned);
describes the existing function and should be used if you want to
avoid relying on the automatic parameter conversions that prototypes
provide (in order that your code continue to work on older systems).

>but how does this deal with a successful
>write very near the maximum unsigned value?

You have to cast the result to unsigned before comparing it to the
requested count.  But before you do that, test for -1 (error).
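To make that concrete, here is a minimal sketch of the check described above; check_write is a hypothetical helper name, not a standard function:

```c
#include <unistd.h>

/* Sketch of the recommended check: test for -1 (error) first,
 * then cast the result to unsigned before comparing it to the
 * requested count.  Returns 1 for a complete write, 0 for a
 * short write, -1 on error. */
int check_write(int fd, const char *buf, unsigned nbyte)
{
    int n = write(fd, buf, nbyte);

    if (n == -1)                    /* check for error first */
        return -1;
    if ((unsigned) n == nbyte)      /* cast before comparing */
        return 1;
    return 0;                       /* short write */
}
```

The order matters: comparing an int result against an unsigned count promotes the int, so a -1 error return would otherwise compare equal to the maximum unsigned value.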

>P.S. - I know that write(2) is not specified in the dpANS.

But it is specified by IEEE 1003.1-1988 (they don't use prototypes).
My copy isn't at hand so I can't check just how it ended up.

gwyn@smoke.BRL.MIL (Doug Gwyn ) (10/26/88)

In article <6794@pasteur.Berkeley.EDU> faustus@ic.Berkeley.EDU (Wayne A. Christopher) writes:
-In article <902@vsi.COM>, friedl@vsi.COM (Stephen J. Friedl) writes:
-> 	extern int write(int, const void *, unsigned);
-> ... how does this deal with a successful
-> write very near the maximum unsigned value? 
-If you have a program that has a good reason for wanting to write 2 billion
-bytes at a time, please tell me what it is...

Lots of people have had trouble with this for modest request sizes,
say from 32K to 64K-1.

rbutterworth@watmath.waterloo.edu (Ray Butterworth) (10/27/88)

In article <6794@pasteur.Berkeley.EDU>, faustus@ic.Berkeley.EDU (Wayne A. Christopher) writes:
> If you have a program that has a good reason for wanting to write 2 billion
> bytes at a time, please tell me what it is...

16-bit ints can only hold about 32K.


> In article <902@vsi.COM>, friedl@vsi.COM (Stephen J. Friedl) writes:
> >     extern int write(int, const void *, unsigned);
> > ... how does this deal with a successful
> > write very near the maximum unsigned value?  The return
> > would then appear to be negative.

How about:
    extern ptrdiff_t write(int, const void *, size_t);

since the last parameter has to be as big as the buffer
(which you can take sizeof), and the return value has
to be at least that big and also allow a negative value.

friedl@vsi.COM (Stephen J. Friedl) (10/27/88)

I write:
<
< 	extern int write(int, const void *, unsigned);
<
< ... how does this deal with a successful
< write very near the maximum unsigned value?  The return
< would then appear to be negative.

Wayne A. Christopher writes:
<
< If you have a program that has a good reason for wanting to write 2 billion
< bytes at a time, please tell me what it is...

There are 16-bit machines out there for which this is
an entirely reasonable question.

-- 
Steve Friedl    V-Systems, Inc.  +1 714 545 6442    3B2-kind-of-guy
friedl@vsi.com     {backbones}!vsi.com!friedl    attmail!vsi!friedl
----Nancy Reagan on 120MB SCSI cartridge tape: "Just say *now*"----

dhesi@bsu-cs.UUCP (Rahul Dhesi) (10/27/88)

In article <902@vsi.COM> friedl@vsi.COM (Stephen J. Friedl) wants
to choose between
>	extern int write(int, const void *, unsigned);
and
>	extern int write(int, const void *, int);

If you assume the ability to write more bytes than will fit in an int,
then neither is correct, since write() returns the number of bytes
actually written.  A correct form (though I'm not sure whether
"const" is needed) is:

     extern unsigned int write (int fd, void *, unsigned int count);

The return value from write is then the number of bytes actually
written, or defined to be ERR_UNSIGNED in case of an error.  Naturally,
you would have to make sure that write() does indeed return
ERR_UNSIGNED and change the manuals accordingly.

Also:

#define   ERR_UNSIGNED    ((unsigned int) -1)
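Under that convention, the caller's error test might look like this; is_write_error is a hypothetical helper, and of course real write() does not behave this way today:

```c
/* Sketch of the proposed convention: an unsigned-returning write()
 * signals error by returning ERR_UNSIGNED, i.e. (unsigned int)-1. */
#define ERR_UNSIGNED ((unsigned int) -1)

int is_write_error(unsigned int result)
{
    return result == ERR_UNSIGNED;
}
```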
-- 
Rahul Dhesi         UUCP:  <backbones>!{iuvax,pur-ee}!bsu-cs!dhesi

ok@quintus.uucp (Richard A. O'Keefe) (10/27/88)

In article <4507@bsu-cs.UUCP> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
>     extern unsigned int write (int fd, void *, unsigned int count);
	     ^^^^^^^^^^^^			 ^^^^^^^^^^^^
Shouldn't these be "size_t", or am I more than usually confused?

gwyn@smoke.BRL.MIL (Doug Gwyn ) (10/28/88)

In article <21763@watmath.waterloo.edu> rbutterworth@watmath.waterloo.edu (Ray Butterworth) writes:
>How about:
>    extern ptrdiff_t write(int, const void *, size_t);
>since the last parameter has to be as big as the buffer
>(which you can take sizeof), and the return value has
>to be at least that big and also allow a negative value.

NO!  There is no guarantee that any C data object can be written
with a single call to write().  Don't go changing the interface
from what it is to what you think it should have been.  Invent a
new function instead.

jfh@rpp386.Dallas.TX.US (The Beach Bum) (10/28/88)

In article <6794@pasteur.Berkeley.EDU> faustus@ic.Berkeley.EDU (Wayne A. Christopher) writes:
>In article <902@vsi.COM>, friedl@vsi.COM (Stephen J. Friedl) writes:
>> ... how does this deal with a successful
>> write very near the maximum unsigned value?
>
>If you have a program that has a good reason for wanting to write 2 billion
>bytes at a time, please tell me what it is...


I can't think of too many programs where that would be true.

But wanting to write 32K is something well within reason.
-- 
John F. Haugh II                        +----Make believe quote of the week----
VoiceNet: (214) 250-3311   Data: -6272  | Nancy Reagan on Richard Stallman:
InterNet: jfh@rpp386.Dallas.TX.US       |          "Just say `Gno'"
UucpNet : <backbone>!killer!rpp386!jfh  +--------------------------------------

rbutterworth@watmath.waterloo.edu (Ray Butterworth) (10/28/88)

In article <8777@smoke.BRL.MIL>, gwyn@smoke.BRL.MIL (Doug Gwyn ) writes:
> In article <21763@watmath.waterloo.edu> rbutterworth@watmath.waterloo.edu (Ray Butterworth) writes:
> >    extern ptrdiff_t write(int, const void *, size_t);

> NO!  There is no guarantee that any C data object can be written
> with a single call to write().

Just as there is no guarantee that any data object whose size
will fit into an (int) can be written with a single call to write().

Short of inventing a new type, say (io_size_t), using (size_t) is no
worse than using (int), and in some ways is more appropriate since the
value of that parameter is likely to come from an expression involving
sizeof.

> Don't go changing the interface from what it is to what you think
> it should have been.

I wasn't.  The original request asked for a "good" prototype,
not a "correct" one.  To get the correct definition, one simply
has to look at the existing UNIX standard, man page, library source,
or header file, and hope most of them are the same.

One question though.  If the "correct" type is (int) or (unsigned int),
does that mean that 16-bit-int machines will not be allowed to have
write() functions that can write more than 16-bits worth of data
even though the non-C part of the software and hardware is quite
capable of it?  That seems like a silly restriction.  In particular,
that makes it impossible to write a C program for a 16-bit C compiler
that can read a tape written with 128K blocks.  ("sorry, you'll have
to use fortran; C can't handle that"?)

henry@utzoo.uucp (Henry Spencer) (10/29/88)

In article <902@vsi.COM> friedl@vsi.COM (Stephen J. Friedl) writes:
>What is a proper prototype for write(2)?  It would have to
>be one of:
>	extern int write(int, const void *, unsigned);
>or
>	extern int write(int, const void *, int);

The latter is correct.  See any Unix manual.  Yes, this means that the
size of a write buffer is limited to 32767 bytes on a 16-bit machine,
and that it is possible to create C objects that are too big for a
single write call.
-- 
The dream *IS* alive...         |    Henry Spencer at U of Toronto Zoology
but not at NASA.                |uunet!attcan!utzoo!henry henry@zoo.toronto.edu

levy@ttrdc.UUCP (Daniel R. Levy) (10/29/88)

In article <8777@smoke.BRL.MIL>, gwyn@smoke.BRL.MIL (Doug Gwyn ) writes:
> 
> NO!  There is no guarantee that any C data object can be written
> with a single call to write().  Don't go changing the interface
> from what it is to what you think it should have been.  Invent a
> new function instead.

I'd bet that, this being so, it's an easy error in code that uses write()
directly to do something like

	if (write(fd,(char *)&object,sizeof(object)) == -1) {
		perror(...);
		...
	} else {
		/* presume everything was hunky-dory */
		...
	}

not taking into account the possibility of a "short write" which is yet
not the result of an error condition.  I was burned like this when porting
a game program which originally ran on System V, which to do a save writes
off its entire data segment in one big write(), to the Eunice emulation of
BSD under VMS.  write() would only work in 65k chunks, as I recall, and
I had to use a loop.

Given that write() has the "right" (no pun intended) to refuse to take the
entirety of a data object all in one shot, why isn't that exercised to solve
the problem of write() returning an int when it can be told to write an
unsigned number of bytes?  Can anyone relate any actual case histories where,
say, a 16 bit int machine wants to write off a 65k object (say, a block on a
tape)?
-- 
|------------Dan Levy------------|  THE OPINIONS EXPRESSED HEREIN ARE MINE ONLY
| Bell Labs Area 61 (R.I.P., TTY)|  AND ARE NOT TO BE IMPUTED TO AT&T.
|        Skokie, Illinois        | 
|-----Path:  att!ttbcad!levy-----|

gwyn@smoke.BRL.MIL (Doug Gwyn ) (10/30/88)

In article <21785@watmath.waterloo.edu> rbutterworth@watmath.waterloo.edu (Ray Butterworth) writes:
>One question though.  If the "correct" type is (int) or (unsigned int),
>does that mean that 16-bit-int machines will not be allowed to have
>write() functions that can write more than 16-bits worth of data
>even though the non-C part of the software and hardware is quite
>capable of it?  That seems like a silly restriction.  In particular,
>that makes it impossible to write a C program for a 16-bit C compiler
>that can read a tape written with 128K blocks.  ("sorry, you'll have
>to use fortran; C can't handle that"?)

Presumably such a system could provide an ioctl() or some other special
means of transferring such data, or provide internal buffering etc.

In fact, PDP-11 UNIX could not read or write tape records containing
more than 64K-1 bytes.  Note that ANSI standards for 1/2" magtape
prohibit records more than 2K bytes, and some magtape systems are
unable to handle more than 5K.  Thus inter-system portable interchange
is already constrained well below the size that causes problems for
read()/write(), and on a given system if one cannot write too-long
records then of course there is no problem with reading them back.

gwyn@smoke.BRL.MIL (Doug Gwyn ) (10/30/88)

In article <2991@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes:
-not taking into account the possibility of a "short write" which is yet
-not the result of an error condition.  I was burned like this when porting
-a game program which originally ran on System V, which to do a save writes
-off its entire data segment in one big write(), to the Eunice emulation of
-BSD under VMS.  write() would only work in 65k chunks, as I recall, and
-I had to use a loop.

Yes, you should always loop on successful writes until all the data is
transferred, unless it is known that exact record size is important (as
on a magtape duplicator).
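The loop in question might be sketched as follows; full_write is a hypothetical name, not a standard function:

```c
#include <unistd.h>

/* Keep calling write() until all nbyte bytes have been
 * transferred, advancing past each short write.
 * Returns 0 on success, -1 on error. */
int full_write(int fd, const char *buf, unsigned nbyte)
{
    while (nbyte > 0) {
        int n = write(fd, buf, nbyte);
        if (n == -1)
            return -1;              /* a real error, not a short write */
        buf   += n;                 /* skip what was already written */
        nbyte -= (unsigned) n;
    }
    return 0;
}
```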

I think I'll post my 9th Ed. UNIX-compatible "cat" source as an example.

gwyn@smoke.BRL.MIL (Doug Gwyn ) (10/30/88)

In article <1988Oct28.170735.23991@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>The latter is correct.  See any Unix manual.

Sorry, but not all UNIX manuals agree about this.  IEEE 1003.1 went with
the SVID on this; the third parameter to read() and write() is an unsigned
int.  Even on the PDP-11 UNIX I used to use, one could succeed in reading/
writing records up to 64K-2 bytes although the manual said that the
argument was a plain int.

ok@quintus.uucp (Richard A. O'Keefe) (10/30/88)

In article <1988Oct28.170735.23991@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>In article <902@vsi.COM> friedl@vsi.COM (Stephen J. Friedl) writes:
>>What is a proper prototype for write(2)?
>>	extern int write(int, const void *, unsigned);		or
>>	extern int write(int, const void *, int);
>
>The latter is correct.  See any Unix manual.

Well, if you see the System V Interface Definition, you find in WRITE(BA_OS)
on page 148 of Volume 1
	int write(filedes, buf, nbyte)
	int filedes;
	char *buf;
	unsigned nbyte;
	^^^^^^^^
Hardly surprising, as the argument is often (sizeof something), which is
unsigned.  I read this as meaning that you can write up to USI_MAX-1 bytes
[see p29 to find out what USI_MAX is].  You have to be careful and test
	int n = write(filedes, buf, nbyte);
	unsigned u = n;
	if (n == -1) { there was an error }
	else { u bytes were written }
rather than
	if (n < 0) { there was an error }
*OH* my broken programs...

Oddly enough, in a superb demonstration of consistency at its very best,
the SVID defines fread() and fwrite() in FWRITE(BA_OS) on p87 of the same
volume as taking >>int<< as the type of "size"; if you pass in a
(sizeof something) which is in (INT_MAX, USI_MAX), fwrite() will write nothing.

dhesi@bsu-cs.UUCP (Rahul Dhesi) (10/30/88)

In article <582@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>In article <4507@bsu-cs.UUCP> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
>>     extern unsigned int write (int fd, void *, unsigned int count);
>	     ^^^^^^^^^^^^			 ^^^^^^^^^^^^
>Shouldn't these be "size_t", or am I more than usually confused?

Yes.  :-)
-- 
Rahul Dhesi         UUCP:  <backbones>!{iuvax,pur-ee}!bsu-cs!dhesi

friedl@vsi.COM (Stephen J. Friedl) (10/31/88)

In article <8798@smoke.BRL.MIL>, gwyn@smoke.BRL.MIL (Doug Gwyn ) writes:
>
> In fact, PDP-11 UNIX could not read or write tape records containing
> more than 64K-1 bytes.

The drives on the AT&T 3B5 and 3B15 are limited to *8k* blocks.

Damn.

-- 
Steve Friedl    V-Systems, Inc.  +1 714 545 6442    3B2-kind-of-guy
friedl@vsi.com     {backbones}!vsi.com!friedl    attmail!vsi!friedl
----Nancy Reagan on 120MB SCSI cartridge tape: "Just say *now*"----

henry@utzoo.uucp (Henry Spencer) (11/02/88)

In article <1988Oct28.170735.23991@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>>	extern int write(int, const void *, int);
>
>The latter is correct.  See any Unix manual...

I'm told that this is yet another thing that System V has broken.  So,
amend that:  "See any *real* Unix manual..."
-- 
The dream *IS* alive...         |    Henry Spencer at U of Toronto Zoology
but not at NASA.                |uunet!attcan!utzoo!henry henry@zoo.toronto.edu