guy@rlgvax.UUCP (07/10/83)
There is a bug fix to make "chroot" secure under V7 (under vanilla V7,
even if you set a process's root directory to "/usr/guest", "/.." still refers
to "/usr", and thus you can't use "chroot" to box a user into a restricted
environment; the escape is sketched after the example below); the same fix was
made in 4.1BSD and in System III and later USG UNIX releases. The fix follows,
along with another fix to forbid creating files in a directory with a zero
link count. This latter fix prevents the creation of "orphan" files with the
sequence:
mkdir foo
cd foo
rmdir ../foo
>orphan
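
For illustration, the escape the first fix closes looks roughly like this
from user code (a minimal sketch, assuming an unfixed V7 kernel, super-user
privilege for the chroot(2) call, and "/usr/guest" as the restricted
directory; the paths are only placeholders):

	/*
	 * On an unfixed kernel, ".." at the process's root still names
	 * the real parent directory, so a process confined to
	 * "/usr/guest" can reach the real root as "/../..".
	 */
	main()
	{
		if (chroot("/usr/guest") < 0) {	/* requires super-user */
			perror("chroot");
			exit(1);
		}
		chdir("/../..");	/* cwd is now the real root directory */
		execl("/../../bin/sh", "sh", 0);	/* the real /bin/sh */
		perror("execl");
		exit(1);
	}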
These are the 4.1BSD fixes; the System III fix to the second problem puts
the test on the line that reads:
if((dp->i_mode&IFMT) != IFDIR)
and changes it to:
if((dp->i_mode&IFMT) != IFDIR || dp->i_nlink==0)
which catches the problem slightly earlier (skipping the search of the directory
entirely) and returns the error ENOTDIR instead of ENOENT.
*** nami.c.orig Sun Jul 10 15:01:57 1983
--- nami.c Sun Jul 10 15:03:16 1983
***************
*** 99,104
u.u_segflg = 1;
eo = 0;
bp = NULL;
eloop:
--- 99,107 -----
u.u_segflg = 1;
eo = 0;
bp = NULL;
+ if (dp == u.u_rdir && u.u_dbuf[0] == '.' &&
+ u.u_dbuf[1] == '.' && u.u_dbuf[2] == 0)
+ goto cloop;
eloop:
***************
*** 111,117
if(u.u_offset >= dp->i_size) {
if(bp != NULL)
brelse(bp);
! if(flag==1 && c=='\0') {
if(access(dp, IWRITE))
goto out;
u.u_pdir = dp;
--- 114,120 -----
if(u.u_offset >= dp->i_size) {
if(bp != NULL)
brelse(bp);
! if(flag==1 && c=='\0' && dp->i_nlink) {
if(access(dp, IWRITE))
goto out;
u.u_pdir = dp;
Guy Harris
{seismo,mcnc,we13,brl-bmd,allegra
swatt@ittvax.UUCP (Alan S. Watt) (07/13/83)

A lot of discussion on this. I've thought about using the "chroot" call
to dump people into a separate environment where they can't do any damage
except to each other. If you plan to do this, you should be aware that a
separate filesystem does NOT give you a separate "virtual machine". Several
obvious differences:

1) All processes on the same CPU vie for the same fixed pool of CPU time.
Nothing stops people in your "protected" environment from firing up lots
of CPU-eating processes.

2) Lots of things in UNIX are global to the kernel, and not separate for
each file system. One example is process ID's. If you had a super-user on
the "protected" system, and a "ps" command which could look up the right
system symbol table and peek into kernel memory, said super-user could kill
processes owned by users in the outside environment. For this reason, you
can't "recycle" user ID's and have the same UID used both by a normal user
and one in the protected environment; the "protected" user could write a
program which executed the system call:

	kill(-1,9);	/* If that's the right order */

which would kill all processes for which he had permission.

The idea still has a lot of merit, as it does accomplish one major goal,
which is to make data on the main part of the system inaccessible to users
in the protected environment. How about this for a starting point:

1) Have a login "guests" or something, with user-id 0, running a special
program which does a "chroot" to the protected area (let's just say
"/subenviron"), and does the following other things (a rough sketch of such
a wrapper follows this article):
	a) Sets "nice" to 2 or thereabouts.
	b) If running VMUNIX, uses the "vmlimit" command to set limits on
	   CPU time, dataspace, etc. (this may be pointless; "csh" will
	   allow them to set it back easily enough).
	c) Execs "/bin/login" (which is really "/subenviron/bin/login").

2) This second login program would read "/subenviron/etc/passwd" for the
list of valid login ID's. System administrators would take care that all
UID's and GID's in this file were unique to this sub-environment, and did
not occur in the "main" system or in other sub-environments (say by adding
N*1000 to them, where <N> is the number of the sub-environment). In no case
should an entry for the super-user appear here.

3) The sub-environment would have all device entries for disks, tapes,
etc., removed.

4) There are still some other holes; for example, tty devices will have to
appear in this restricted environment, and users could do the usual
"stty raw noecho >/dev/tty$random_name" and screw up some poor person in
the main environment on that tty node. This could be eliminated by making
the program which issues the "chroot" call instead set up a connection to
a pty master node, where the slave node exists in the sub-environment, in
which there is already an "init" running, having done a "chroot". This adds
considerably to the overhead, however.

Anything I've missed?

It seems much simpler to just define another "VM" user, and run another
copy of "UTS" in that virtual machine partition. Guaranteed to work and
give you absolute separation of the two environments. Further, you can use
"VM" tools to assign various priority restrictions to one partition.

	- Alan S. Watt
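
A rough sketch of the wrapper program described in step 1 of the starting
point above (illustrative only: it assumes the sub-environment lives at
"/subenviron" with its own "/bin/login", omits the VMUNIX "vmlimit" step,
and the nice value and error handling are just placeholders):

	main()
	{
		/*
		 * Runs with user-id 0 as the "guests" login: confine the
		 * process to the sub-environment, lower its priority a
		 * bit, then hand off to the sub-environment's own login.
		 */
		if (chroot("/subenviron") < 0) {
			perror("/subenviron");
			exit(1);
		}
		chdir("/");	/* don't leave the old working directory reachable */
		nice(2);	/* step 1a: mild scheduling handicap */
		/* step 1b (vmlimit under VMUNIX) omitted here */
		execl("/bin/login", "login", 0);	/* really /subenviron/bin/login */
		perror("login");
		exit(1);
	}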
A lot of discussion on this. I've thought about using the "chroot" call to dump people into a separate environment where they can't do any damage except to each other. If you plan to do this, you should be aware that a separate filesystem does NOT give you a separate "virtual machine". Several obvious differences: 1) All processes on the same CPU vie for the same fixed pool of CPU time. Nothing stops people on your "protected" environment from firing up lots of CPU-eating processes. 2) Lots of things in UNIX are global to the kernel, and not separate for each file system. One example is process ID's. If you had a super-user on the "protected" system, and a "ps" command which could look up the right system symbol table and peek into kernel memory, said super-user could kill processes owned by users on the outside environment. For this reason, you can't "recycle" user ID's and have the same UID used both by a normal user and one in the protected environment; the "protected" user could write a program which executed the system call: kill(-1,9); /* If that's the right order */ which would kill all processes for which he had permission. The idea still has a lot of merit, as it does accomplish one major goal, which is to make data on the main part of the system inaccessable to users on the protected environment. How about this for a starting point: 1) Have a login "guests" or something, with user-id 0, running a special program which does a "chroot" to the protected area (let's just say "/subenviron"), and does the following other things: a) Sets "nice" to 2 or thereabouts. b) If running VMUNIX, uses the "vmlimit" command to set limits on CPU time, dataspace, etc. (this may be pointless; "csh" will allow them to set it back easily enough). c) Execs "/bin/login" (which is really "/subenviron/bin/login"). 2) This second login program would read "/subenviron/etc/passwd" for the list of valid login ID's. System administrators would take care that all UID's and GID's in this file were unique to this sub-environment, and did not occur in the "main" system, or in other sub-environments (say by adding N*1000 to them, where <N> is the number of the sub-environment). In no case should an entry for the super-user appear here. 3) The sub-environment would have all device entries for disks, tapes, etc., removed. 4) There are still some other holes, for example Tty devices will have to appear in this restricted environment, and users could do the usual "stty raw noecho >/dev/tty$random_name", and screw up some poor person in the main environment on that tty node. This could be eliminated by making the program which issues the "chroot" call instead set up a connection to a pty master node, where the slave node exists in the sub-environment, where there is already an "init" running, having done a "chroot". This adds considerably to the overhead however. Anything I've missed? It seems much simpler to just define another "VM" user, and run another copy of "UTS" in that virtual machine partition. Guaranteed to work and give you absolute separation of the two environments. Further, you can use "VM" tools to assign various priority restrictions to one partition. - Alan S. Watt