[comp.unix.wizards] open

roy@phri.UUCP (04/06/87)

	In the MtXinu 4.3/NFS sticky(8) man page, it says:  "Neither
open(2) nor mkdir(2) will create a file with the sticky bit set."
Why not?
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

"you can't spell deoxyribonucleic without unix!"

naim@nucsrl.UUCP (Naim Abdullah) (11/17/87)

I have run across a kernel device driver bug in System V, rel. 3.1
on AT&T 3B2s. The bug is that opening /dev/ni for writing causes
a kernel panic. This is the documented way of sending data over 3BNET,
and now that it causes a panic, I don't know what else to use for
sending packets over 3BNET. Note that this used to work fine in rel.
2.1; I only discovered that it no longer worked when we upgraded to
rel. 3.1. Here is a one-liner that causes the panic (you can run it
from any account).

main(){ return(open("/dev/ni", 1)); }

Running this gives:
TRAP
proc= 401CDCC0 psw= 280072B
pc= 4002AE74
PANIC: KERNEL MMU FAULT (F_ACCESS)

Please don't flame me for posting a program that causes the crash. If
your machine has this bug, I think you should know about it. Just turn
off write permission on /dev/ni if you don't want Joe User to crash
your machine.
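
The suggested workaround is a one-line chmod. A sketch, demonstrated on
a scratch file since most readers won't have a /dev/ni to experiment
with (on a real 3B2 you would run `chmod a-w /dev/ni' as root):

```shell
# Demonstrate the workaround on a scratch file; on a real 3B2 the
# command would be:  chmod a-w /dev/ni   (run as root)
f=$(mktemp)
chmod a-w "$f"
ls -l "$f" | cut -c1-10     # the mode string should contain no 'w'
rm -f "$f"
```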

The strange thing is that nisend(1) continues to work fine. I wonder
how it sends data over 3BNET?

Anyway, has anybody discovered any workarounds? We have 5.3.1 sources,
so bug fixes are welcome.


		      Naim Abdullah
		      Dept. of EECS,
		      Northwestern University

		      Internet: naim@eecs.nwu.edu
		      Uucp: {ihnp4, chinet, gargoyle}!nucsrl!naim

P.S: Notes cannot cross-post (yet), so I have posted copies of
this article in comp.unix.wizards, comp.unix.questions, comp.sys.att.
So you may see this article more than once.

chris@mimsy.UUCP (Chris Torek) (05/20/89)

In article <8295@june.cs.washington.edu> ka@june.cs.washington.edu
(Kenneth Almquist) writes:
>...  So I tried writing a little C program
>that opened and closed a file in the current directory 1000 times:
>
>	17.3 real    0.0 user    3.2 sys
>
>This is on an otherwise idle system running Ultrix 3.0, a 4.2 BSD
>derivative.

Here is mine (entered with the `cat' editor :-) ), and my results:

	main(){register int fd,i;
	for(i=1000;--i>=0;) fd=open("foo",0), close(fd);
	}

	% time ./t
	0.1u 0.9s 0:01 84% 1+3k 0+0io 2pf+0w

4.3BSD > Ultrix ... ?
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

thomas@mipsbx.nac.dec.com (Matt Thomas) (05/22/89)

>In article <8295@june.cs.washington.edu> ka@june.cs.washington.edu
>>...  So I tried writing a little C program
>>
>>	17.3 real    0.0 user    3.2 sys
>>
>>This is on an otherwise idle system running Ultrix 3.0, a 4.2 BSD
>>derivative.
>[program removed]
>
>	% time ./t
>	0.1u 0.9s 0:01 84% 1+3k 0+0io 2pf+0w
>
>4.3BSD > Ultrix ... ?
>-- 
>In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
>Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

Hard to say, as no clue was given as to the type of machine and/or disks.

    VAX 6220 (Ultrix V3.0, RA82):  0.0u 1.1s 0:00 120% 1+5k 0+1io 0pf+0w
    DS3100 (Ultrix V3.1, RZ23):    0.0u 0.2s 0:00 100% 10+21k 0+0io 0pf+0w
    uVAX II (Ultrix V3.0, RA81):   0.0u 2.8s 0:02 100% 1+5k 0+0io 0pf+0w
    VAX 8700 (Ultrix V3.1, RA81):  0.0u 0.5s 0:00 83% 1+5k 0+0io 0pf+0w

Which are roughly equivalent to the BSD 4.3 numbers.  When running over
NFS, things really slow down.

 VAX 8700 (Ultrix V3.1, RA81+NFS): 0.0u 1.8s 0:07 26% 2+6k 0+0io 0pf+0w

-- 
Matt Thomas                     Internet:   thomas@decwrl.dec.com
DECnet-Ultrix Development       UUCP:       ...!decwrl!thomas
Digital Equipment Corporation   Disclaimer: This message reflects my own
Littleton, MA                               warped views, etc.

schwartz@shire.cs.psu.edu (Scott Schwartz) (05/22/89)

1000 open's and closes:

Kenneth Almquist writes:
	17.3 real    0.0 user    3.2 sys
 This is on an otherwise idle system running Ultrix 3.0, a 4.2 BSD


Chris Torek writes:  
	0.1u 0.9s 0:01 84% 1+3k 0+0io 2pf+0w
 4.3BSD > Ultrix ... ?

Matt Thomas writes:
    VAX 6220 (Ultrix V3.0, RA82):  0.0u 1.1s 0:00 120% 1+5k 0+1io 0pf+0w
    DS3100 (Ultrix V3.1, RZ23):    0.0u 0.2s 0:00 100% 10+21k 0+0io 0pf+0w
    uVAX II (Ultrix V3.0, RA81):   0.0u 2.8s 0:02 100% 1+5k 0+0io 0pf+0w
    VAX 8700 (Ultrix V3.1, RA81):  0.0u 0.5s 0:00 83% 1+5k 0+0io 0pf+0w
    VAX 8700 (Ultrix V3.1, RA81+NFS): 0.0u 1.8s 0:07 26% 2+6k 0+0io 0pf+0w

I write:
    Sun3/60  (SunOS4.0, NFS):   0.0u 2.4s 0:07 34% 0+80k 0+0io 0pf+0w
    Sun4/260 (SunOS4.0, Eagle): 0.0u 0.2s 0:00 110% 0+152k 0+0io 0pf+0w
    Sun4/280 (SunOS4.0, NFS):   0.0u 1.8s 0:04 40% 0+160k 0+0io 0pf+0w
-- 
Scott Schwartz		<schwartz@shire.cs.psu.edu>

chris@mimsy.UUCP (Chris Torek) (05/22/89)

>>Article <8295@june.cs.washington.edu>, from ka@june.cs.washington.edu:
[1000 open() calls]
>>>	17.3 real    0.0 user    3.2 sys

In article <17643@mimsy.UUCP> I wrote:
>>	% time ./t
>>	0.1u 0.9s 0:01 84% 1+3k 0+0io 2pf+0w
>>
>>4.3BSD > Ultrix ... ?

In article <2491@shlump.dec.com> thomas@mipsbx.nac.dec.com (Matt Thomas)
replies:
>Hard to say, as no clue was given as to the type of machine and/or disks.

True.

>    VAX 6220 (Ultrix V3.0, RA82):  0.0u 1.1s 0:00 120% 1+5k 0+1io 0pf+0w
>    DS3100 (Ultrix V3.1, RZ23):    0.0u 0.2s 0:00 100% 10+21k 0+0io 0pf+0w
>    uVAX II (Ultrix V3.0, RA81):   0.0u 2.8s 0:02 100% 1+5k 0+0io 0pf+0w
>    VAX 8700 (Ultrix V3.1, RA81):  0.0u 0.5s 0:00 83% 1+5k 0+0io 0pf+0w
>
>Which are roughly equivalent to the BSD 4.3 numbers.  When running over
>NFS, things really slow down.
>
> VAX 8700 (Ultrix V3.1, RA81+NFS): 0.0u 1.8s 0:07 26% 2+6k 0+0io 0pf+0w

To fill in the missing details:

   VAX 11/785 (4.3BSD-tahoe, RA81): 0.1u 0.9s 0:01 84% 1+3k 0+0io 2pf+0w

An 11/780 is supposed to be `pretty close' to a MicroVAX II, and
an 11/785 is supposed to be `about' 50% faster.  Scaling the Ultrix
uVAX II sys time of 2.8s by 1/1.5 gives an expected sys (`s') time
of 1.86..., not the 0.9 observed.  Apparently name lookups in
4.3BSD-tahoe have somewhat less overhead than those in Ultrix 3.0.
(The cost may go up again in 4.4 as a consequence of GVFS.)
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

matt@oddjob.uchicago.edu (Matt Crawford) (05/22/89)

) (Kenneth Almquist) writes:
) >that opened and closed a file in the current directory 1000 times:
) >	17.3 real    0.0 user    3.2 sys
) >Ultrix 3.0, a 4.2 BSD

Chris Torek writes:
) 	0.1u 0.9s 0:01 84% 1+3k 0+0io 2pf+0w
) 4.3BSD > Ultrix ... ?

Sun-3/280, SunOS 3.5.2:
0.0u 0.5s 0:00 87% 0+8k 2+0io 0pf+0w

Sun > Vax ?
________________________________________________________
Matt Crawford	     		matt@oddjob.uchicago.edu

ka@june.cs.washington.edu (Kenneth Almquist) (05/24/89)

I wrote:
>> ...  So I tried writing a little C program
>> that opened and closed a file in the current directory 1000 times:
>>
>> 	17.3 real    0.0 user    3.2 sys
>>
>> This is on an otherwise idle system running Ultrix 3.0, a 4.2 BSD
>> derivative.

Chris Torek's numbers:
> 	% time ./t
> 	0.1u 0.9s 0:01 84% 1+3k 0+0io 2pf+0w
> 
> 4.3BSD > Ultrix ... ?

Torek > ka.  The file I was opening was on an NFS file system rather
than a local file system.  Using a local file decreases the times to

	0.4 real    0.0 user    0.4 sys

Also, using an empty file rather than an executable program for the
true command is faster when NFS is not involved:

	2.8 real    0.3 user    2.3 sys		(when using empty file)
	4.2 real    0.3 user    3.8 sys		(when using executable)
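
The empty-file trick works because exec of an empty file fails with
ENOEXEC and the shell then falls back to running it as a (trivially
successful) script, skipping the cost of loading a real executable.
A quick demonstration on a scratch file:

```shell
# An empty but executable file behaves like true(1): exec fails
# with ENOEXEC, so the shell runs it as an empty script, which
# exits with status 0.
t=$(mktemp)
chmod +x "$t"        # empty, but executable
"$t"                 # shell falls back to running it as a script
echo "exit status: $?"    # prints: exit status: 0
rm -f "$t"
```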

Kenneth Almquist