ccc@bu-cs.UUCP (Cameron Carson) (03/14/86)
[]
After an exhaustive examination of the code of the standard daemons
(well, I sort of glanced at rwhod and ftpd), I noticed that the
convention for disassociating the now-forked daemon from its control
terminal seems to be something on the order of:
	int s;

	for (s = 0; s < SOME_NUM; s++)
		(void) close(s);
	(void) open("/", 0);
	(void) dup2(0, 1);
	(void) dup2(0, 2);
	s = open("dev/tty", 2);
	if (s >= 0) {
		ioctl(s, TIOCNOTTY, 0);
		(void) close(s);
	}
My question is: why open "/" ? Why not open something a little
less vital like, say, /dev/null?
--
Cameron C. Carson
Distributed Systems Group
Boston University ACC
UUCP: ...!{harvard,allegra}!bu-cs!ccc
ARPA: ccc%bu-cs@csnet-relay.arpa
jsdy@hadron.UUCP (Joseph S. D. Yao) (03/22/86)
In article <261@bu-cs.UUCP> ccc@bu-cs.UUCP (Cameron Carson) writes:
>	int s;
>	for (s = 0; s < SOME_NUM; s++)
>		(void) close(s);
>	(void) open("/",0);
>	(void) dup2(0,1);
>	(void) dup2(0,2);
>My question is: why open "/" ?  Why not open something a little
>less vital like, say, /dev/null?

For what it's worth, this is exactly what I have done for every daemon
I have had to write or fix.  However, "if it ain't broke, don't fix
it."  Oh, I do freopen stdout as /dev/console, often.

I hope you meant "/dev/tty" instead of "dev/tty" later on.  Also, a
setpgrp() is usually part of this disassociation process.
-- 
	Joe Yao		hadron!jsdy@seismo.{CSS.GOV,ARPA,UUCP}
keith@motel6.UUCP (Keith Packard) (03/24/86)
In article <261@bu-cs.UUCP> ccc@bu-cs.UUCP (Cameron Carson) writes:
>	int s;
>	for (s = 0; s < SOME_NUM; s++)
>		(void) close(s);
>	(void) open("/",0);
>	(void) dup2(0,1);
>	(void) dup2(0,2);
>My question is: why open "/" ?  Why not open something a little
>less vital like, say, /dev/null?

Well, I suspect the answer to this lies in the dim dark past when unix
ran on small machines.  The inode for "/" is always in memory; the
inode for "/dev/null" is only in memory when it is referenced.  So,
opening "/" instead of "/dev/null" will not cause another inode-table
entry to be filled up.  Useful when your system only has 50 or so
in-core inodes, considering that the daemon will *always* be running.

Keith Packard
...!tektronix!reed!motel6!keith
bogstad@hopkins-eecs-bravo.arpa (William J. Bogstad) (03/27/86)
Keith Packard <keith%motel6.uucp@BRL.ARPA> says:
>In article <261@bu-cs.UUCP> ccc@bu-cs.UUCP (Cameron Carson) writes:
>>	int s;
>>	for (s = 0; s < SOME_NUM; s++)
>>		(void) close(s);
>>	(void) open("/",0);
>>	(void) dup2(0,1);
>>	(void) dup2(0,2);
>>My question is: why open "/" ?  Why not open something a little
>>less vital like, say, /dev/null?
>
>Well, I suspect the answer to this lies in the dim dark past when unix
>ran on small machines.  The inode for "/" is always in memory, the
>inode for "/dev/null" is only in memory when it is referenced.
> ... Useful when your system only has 50 or so incore inodes ...

Also, opening "/" is not dangerous.  No one - not even root - can
write on a directory.  At least on 4.2BSD, you can't even open a
directory for writing.  So your daemon might use the directory for
its standard input but can't do any damage.

In addition, "/" is always there.  "/dev/null" might not be there on
a slightly trashed filesystem.  If "/" is gone you can't even boot.
A daemon you might run under those conditions is "update", the sync()
daemon.

				Bill Bogstad
				{umcp-cs!jhunix allegra}!hopkins!bogstad
				bogstad@hopkins-eecs-bravo.arpa
bzs%bostonu.csnet@csnet-relay.arpa (03/28/86)
Well, when Cam came to me and asked why they open "/" on stdin/stdout,
I guess I came up with everything people have been saying on the list
(the inode is in core anyhow, it's somehow harmless), but I rejected
those reasons as uncompelling, although I suspect some or all of them
(probably the inode argument) were the original motivation.

The real question was: why open anything?  Surely there's nothing
functionally useful about opening stdin/stdout on "/", and it could be
a potential hazard if ported.  If you want to open something 'useful'
I would say either /dev/console or a pipe to a syslogger (at least for
output.)

Maybe people fear bugs in their programs (or routines they've loaded)
will magically start doing I/O (I believe there are still a few
routines around that will do their own perror(), which is a bug.)
Still seems weak.  Maybe fears of inheriting a controlling terminal.

It's just that it's ubiquitous: obviously someone did it that way for
whatever reason and it got copied over and over (which is more or less
what he was doing, using an existing daemon as a model for a new one,
a fine idea in general.)

Maybe rather than rationalizing the current kludge a useful
replacement could be suggested; it seems like an opportunity (that is,
no one will be sorry to see the opening of "/" go away if it were
replaced by something useful.)  Or maybe this is just arguing about
how many angels will fit on the head of a pin (the answer is 7.)

	-Barry Shein, Boston University
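[One concrete replacement along the lines Barry suggests would be to point
stderr at a log file at startup.  This is only a sketch; the function name,
the path argument, and the die-if-no-log policy are assumptions, not
anything proposed verbatim in the thread.]

```c
/*
 * Sketch: instead of opening "/", aim stderr at a log file.
 * The name open_errlog and the exit-on-failure policy are invented
 * for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

void
open_errlog(const char *path)
{
	if (freopen(path, "a", stderr) == NULL)
		exit(1);		/* no log, no daemon */
	setbuf(stderr, NULL);		/* keep stderr unbuffered */
}
```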
ksh@rtgvax.UUCP (Kent S. Harris) (04/02/86)
In article <2177@brl-smoke.ARPA>, bzs%bostonu.csnet@csnet-relay.arpa (Barry Shein) writes:
> ...why do they open "/" on stdin/stdout...
> ...
> The real question was, why open anything?  Surely there's
> nothing functionally useful about opening stdin/stdout on
> "/" and it could be a potential hazard if ported.
> ...

Yes, I believe there is.  I've not followed this all the way to ground
state, but I recall deep within the cobwebs something about stdio
having some hard-coded constants regarding fd's < 3 (I know the kernel
doesn't give two hoots about particular fd's).  The idea is to close
all fd's, open "/", and dup this fd to 1 and 2 so any new opens will
be allocated fd's >= 3.  Doing file stream I/O via stdio where an fd
< 3 has been allocated seems to send data to ye old bit bucket
(empirically determined).

Let's hear from someone with a definitive answer.
jsdy@hadron.UUCP (Joseph S. D. Yao) (04/08/86)
In article <44@rtgvax.UUCP> ksh@rtgvax.UUCP (Kent S. Harris) writes:
>In article <2177@brl-smoke.ARPA>, bzs%bostonu.csnet@csnet-relay.arpa (Barry Shein) writes:
>> The real question was, why open anything?  Surely there's
>> nothing functionally useful about opening stdin/stdout on
>> "/" and it could be a potential hazard if ported.
>some hard coded constants regarding fd's < 3 (I know the kernel doesn't
>give two hoots about particular fd's).  The idea is to close
>all fd's, open "/", and dup this fd to 1 and 2 so any new opens will
>be allocated fd's >= 3.

/usr/include/stdio.h:

	#define	stdin	(&_iob[0])
	#define	stdout	(&_iob[1])
	#define	stderr	(&_iob[2])
henry@utzoo.UUCP (Henry Spencer) (04/28/86)
> [Why do daemons open / as stdin/stdout/stderr?]
> The real question was, why open anything?  Surely there's
> nothing functionally useful about opening stdin/stdout on
> "/" and it could be a potential hazard if ported...

You have to open *something*, because innocently writing an error
message to stderr could be a disaster if the program got 2 as the
descriptor for an explicit open of some important file.  This is one
way of subverting setuid programs, in fact.

Our daemons open /dev/null for stdin and stdout and a log file for
stderr.
-- 
Support the International
League For The Derision		Henry Spencer @ U of Toronto Zoology
Of User-Friendliness!		{allegra,ihnp4,decvax,pyramid}!utzoo!henry
rick@nyit.UUCP (Rick Ace) (05/06/86)
> > [Why do daemons open / as stdin/stdout/stderr?]
> > The real question was, why open anything?  Surely there's
> > nothing functionally useful about opening stdin/stdout on
> > "/" and it could be a potential hazard if ported...
>
> You have to open *something*, because innocently writing an error message
> to stderr could be a disaster if the program got 2 as the descriptor for
> an explicit open of some important file.  This is one way of subverting
> setuid programs, in fact.
>
> Our daemons open /dev/null for stdin and stdout and a log file for stderr.

Yes, the arguments about having to open *something* are indeed true.
But, conceivably (not likely, I'll admit), someone might have removed
/dev/null.  If your daemons don't check for an error when they open
it, you'll wind up with file descriptors 0 and 1 unopened, and the
same setuid security bugs you're trying to avoid.  It's a solid bet,
though, that if your daemon is executing with uid 0, you'll be able
to open "/" for reading.

Given that you want to open something, "/" is at least as likely to
exist as any other object in the filesystem, so it's a good choice in
that regard.  If the daemon were accidentally to read from file
descriptor 0 ("/") and make some decisions based upon what it got, it
could keep the system programmer occupied for a while :-).

-----
Rick Ace
Computer Graphics Laboratory
New York Institute of Technology
Old Westbury, NY 11568
(516) 686-7644
{decvax,seismo}!philabs!nyit!rick
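[Rick's belt-and-suspenders point could be coded directly: try /dev/null
first, and fall back to "/" only if it is missing.  The function name is
invented for this sketch.]

```c
/*
 * Sketch: prefer /dev/null, but fall back to "/" -- which must exist
 * if the system booted at all -- when /dev/null has been removed.
 * The name open_harmless is invented for illustration.
 */
#include <fcntl.h>

int
open_harmless(void)
{
	int fd = open("/dev/null", O_RDWR);

	if (fd < 0)
		fd = open("/", O_RDONLY);	/* read-only; can't be damaged */
	return fd;				/* -1 only if even "/" failed */
}
```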
stevesu@copper.UUCP (Steve Summit) (05/07/86)
> [Why do daemons open / as stdin/stdout/stderr?]
> The real question was, why open anything?  Surely there's
> nothing functionally useful about opening stdin/stdout on
> "/" and it could be a potential hazard if ported...

Another reason for keeping file descriptors 0, 1, and 2 open is that
it's remarkably easy to write code that depends on it.  Consider the
following fragment, which intends to determine the load average by
inspecting the output of "uptime":

	int fdpair[2], pid;

	pipe(fdpair);
	pid = fork();
	if(pid == 0) {
		close(1);
		dup(fdpair[1]);		/* intends to become 1 */
		execl("/usr/ucb/uptime", "uptime", 0);
	}
	close(fdpair[1]);
	read(fdpair[0], buf, BUFSIZ);
	while(wait(0) != pid)
		;
	. . .

(Please don't nit-pick this program; I am aware of at least eight
things wrong with it, but I didn't want to obscure the example with
all of the error-checking that a production program would require.)

The problem with this program is that the sequence

	close(1);
	dup(fdpair[1]);

will fail miserably if fdpair[1] happens to be 1, as in fact would be
the case if this program were run with no file descriptors initially
open.  Using dup2() might help this example, and of course the best
thing to do would be to use popen(), which is written so as to avoid
this problem.

When writing code that forks and execs other programs with my own
attached input and output, I frequently find myself writing extremely
strange code, such as the following:

	fd = open("/dev/littleredridinghood", 2);
	if(fd != 0) {
		close(0);
		dup(fd);
	}
	if(fd != 1) {
		close(1);
		dup(fd);
	}
	if(fd != 2) {
		close(2);
		dup(fd);
	}
	if(fd > 2)
		close(fd);
	execl("/big/bad", "wolf", 0);

The big bad wolf program is given little red riding hood as standard
input, output, and error.  The checks of fd against 0, 1, and 2 are
in case it is already 0, 1, or 2, which would happen if this program
were invoked under some daemon which didn't leave 0, 1, and 2 open on
something.

I think that one of the main reasons that well-written daemons do
leave 0, 1, and 2 open on something is that there are probably a lot
of programs out there that weren't written by somebody paranoid
enough to check other file descriptors against 0 before closing 0.
("But 0 is always standard input, right?")

					Steve Summit
					tektronix!copper!stevesu
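[Steve's first fragment, redone with dup2() as he suggests, might look like
this sketch.  dup2() is effectively a no-op when the two descriptors are
already equal, so the fdpair[1] == 1 trap disappears.  The helper name is
invented, and /bin/echo stands in for /usr/ucb/uptime so the sketch is
self-contained.]

```c
/*
 * Sketch of the pipe/fork/exec fragment using dup2(), which is safe
 * even when fdpair[1] happens to already be 1.  read_from_command is
 * an invented name; /bin/echo stands in for uptime.
 */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int
read_from_command(char *const argv[], char *buf, size_t len)
{
	int fdpair[2];
	pid_t pid;
	ssize_t got;

	if (pipe(fdpair) < 0)
		return -1;
	pid = fork();
	if (pid < 0)
		return -1;
	if (pid == 0) {
		(void) dup2(fdpair[1], 1);	/* safe even if fdpair[1] == 1 */
		(void) close(fdpair[0]);
		if (fdpair[1] != 1)
			(void) close(fdpair[1]);
		execv(argv[0], argv);
		_exit(127);			/* exec failed */
	}
	(void) close(fdpair[1]);
	got = read(fdpair[0], buf, len - 1);
	(void) close(fdpair[0]);
	(void) waitpid(pid, (int *)0, 0);
	if (got < 0)
		return -1;
	buf[got] = '\0';
	return 0;
}
```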
henry@utzoo.UUCP (Henry Spencer) (05/08/86)
> > Our daemons open /dev/null for stdin and stdout and a log file for stderr.
>
> Yes, the arguments about having to open *something* are indeed true.
> But, conceivably (not likely, I'll admit), someone might have removed
> /dev/null.  If your daemons don't check for an error when they open it,
> you'll wind up with file descriptors 0 and 1 unopened, and the same
> setuid security bugs you're trying to avoid.

Our daemons most assuredly check to make sure, not only that the open
succeeded, but that it got the right descriptor.  No competent
programmer in his right mind does an open (or a malloc) without
checking the result for failure.
-- 
Join STRAW: the Society To	Henry Spencer @ U of Toronto Zoology
Revile Ada Wholeheartedly	{allegra,ihnp4,decvax,pyramid}!utzoo!henry
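[The double check Henry describes can be packaged up as a small helper.
This sketch returns -1 both when the open fails outright and when the
kernel hands back a descriptor other than the one expected; the function
name is invented.]

```c
/*
 * Sketch of the paranoid open Henry describes: verify not only that
 * the open succeeded, but that it landed on the expected descriptor.
 * The name open_as is invented for illustration.
 */
#include <fcntl.h>
#include <unistd.h>

int
open_as(const char *path, int flags, int want)
{
	int fd = open(path, flags);

	if (fd < 0)
		return -1;		/* open itself failed */
	if (fd != want) {
		(void) close(fd);	/* right file, wrong slot */
		return -1;
	}
	return fd;
}
```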