[comp.lang.perl] print return code bug...

clewis@eci386.uucp (Chris Lewis) (05/05/90)

Unless I've missed something, this is supposed to fill up your
file system and then stop, but it ain't stopping:

$Header: perly.c,v 3.0.1.5 90/03/27 16:20:57 lwall Locked $
Patch level: 18

386/ix 1.0.6

> open(handle, ">/usr5/test");
> printf $!;
> $v = "1111111111111111111111111111111111111111111111111111111111111111111";
> $v = "$v$v$v$v$v$v$v$v$v$v$v";
> #	Speedups
> $v = substr($v, 0, 511);
> $! = "";
> while(1) {
>     (printf handle ($v)) || die "BOOM! $! $@";
> }

Be careful about running this ;-) if you want to test it, you might want
to try something different.  The point is that printf doesn't appear to
return 0 on a write error, as the manual implies it should - it says
print is supposed to, and then defines printf in terms of print.
-- 
Chris Lewis, Elegant Communications Inc, {uunet!attcan,utzoo}!lsuc!eci386!clewis
Ferret mailing list: eci386!ferret-list, psroff mailing list: eci386!psroff-list

lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) (05/06/90)

In article <1990May4.202029.9341@eci386.uucp> clewis@eci386 (Chris Lewis) writes:
: Unless I've missed something, this is supposed to fill up your
: file system and then stop, but it ain't stopping:
: 
: > open(handle, ">/usr5/test");
: > printf $!;
: > $v = "1111111111111111111111111111111111111111111111111111111111111111111";
: > $v = "$v$v$v$v$v$v$v$v$v$v$v";
: > #	Speedups
: > $v = substr($v, 0, 511);
: > $! = "";
: > while(1) {
: >     (printf handle ($v)) || die "BOOM! $! $@";
: > }
: 
: Be careful about running this ;-) if you want to test it, you might want
: to try something different - the point is, printf doesn't appear to be
: returning 0 on a write error (as the manual appears to imply it should -
: it does say print is supposed to and then defines printf in terms of
: print)..

Hmm, apparently fwrite() isn't checking the error status if it has to do a
flush.  Nothing much I can do about that.

You can check it yourself though if you say

	select(handle); $| = 1;

This causes an explicit fflush(), which perl checks the error status of.

On the other hand, perhaps I can force errno to a 0 before the fwrite()
and then see if it gets set.  Hang on, let me try that...

Ok, that seems to work ok.  I just hope nobody's fwrite() sets errno just
for the fun of it.

Someday I'm just gonna bypass fwrite() altogether...

Larry

ndjc@ccicpg.UUCP (Nick Crossley) (05/08/90)

In article <7996@jpl-devvax.JPL.NASA.GOV> lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) writes:
>On the other hand, perhaps I can force errno to a 0 before the fwrite()
>and then see if it gets set.  Hang on, let me try that...
>Ok, that seems to work ok.  I just hope nobody's fwrite() sets errno just
>for the fun of it.

Unfortunately, some versions of stdio do just that, such as the one I am
running on at the moment, especially on the first access to a file.  stdio
calls isatty to determine its buffering strategy, and isatty in turn calls
ioctl to see whether the file is a terminal.  If the ioctl fails, as it will
when the file is not a terminal, it sets errno to ENOTTY, and rather
stupidly isatty leaves it set that way.

So if you do check errno after calls to stdio routines, please ignore the
stupid ENOTTY.

Similarly, people need to be careful in their error handling code.  I have
seen such stuff as :-
	if	(some system call fails)
	{
		fprintf (stderr, "nice message");
		perror or other direct usage of errno;
	}
On many systems, the perror will report 'Not a terminal' if stderr has
been redirected.  This can confuse people if the system call was something
like a disk read!
-- 

<<< standard disclaimers >>>
Nick Crossley, ICL NA, 9801 Muirlands, Irvine, CA 92718-2521, USA 714-458-7282
uunet!ccicpg!ndjc  /  ndjc@ccicpg.UUCP

clewis@eci386.uucp (Chris Lewis) (05/09/90)

In article <7996@jpl-devvax.JPL.NASA.GOV> lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) writes:
> In article <1990May4.202029.9341@eci386.uucp> clewis@eci386 (Chris Lewis) writes:

> Hmm, apparently fwrite() isn't checking the error status if it has to do a
> flush.  Nothing much I can do about that.

Hmm, I've just checked our fwrite and it does successfully check the return
code, as in this *does* work:

X	#include <stdio.h>
X	char buffer[30];
X	main() {
X	    FILE *f = fopen("/tmp/TRASH", "w");
X	    int len;
X	    ulimit(2, 20L); /* get the write to blow */
X	    while((len = fwrite(buffer, 1, sizeof(buffer), f)) == sizeof(buffer));
X	    fclose(f);
X	    printf("%d\n", len);
X	}

According to the manual page, fwrite returns the number of items successfully
written - zero for none.  I've tested the above with the buffer set to
30 and BUFSIZ (from stdio.h) and it works properly either way.

ulimit is a way to get System V to fail on a write after an open without
the more draconian approach of blowing the file system.

I've got a somewhat simpler Perl script that doesn't blow your file system
either (this is a shell script):

X	ulimit 1
X	perl -e 'while (0 != printf("Hello\n")) { ; }' > /tmp/TRASH2

perl writes 512 bytes and then just keeps eating CPU...

Using the pattern
	errno = 0;
	fwrite...
	if (errno != 0)
is extremely unsafe (as someone else has already pointed out).

I should go look at the perl source and see if I can see what's happening.
Hmm, a quick look shows an unchecked fprintf in doio's "do_print", but
I don't think that clause is involved....  The fwrites in perl's
*.c's appear properly checked.

I have thought a bit further on this, and I suspect that even this isn't
really sufficient to check for write failures - since the close() can do
a flush, the close itself can fail too.  Perhaps perl's close routine should
return a value that indicates write *or* pipe failure (the latter is
already supported by explicitly testing $?).  Maybe by testing ferror()?
-- 
Chris Lewis, Elegant Communications Inc, {uunet!attcan,utzoo}!lsuc!eci386!clewis
Ferret mailing list: eci386!ferret-list, psroff mailing list: eci386!psroff-list