pcg@cs.aber.ac.uk (Piercarlo Grandi) (02/14/91)
On 11 Feb 91 22:25:00 GMT, pcg@cs.aber.ac.uk (Piercarlo Grandi) said:
[ ... on the average or median size of *open*, that is
dynamically referenced, files ... ]
lm> And this buffer cache for UFS meta data has been here since the
lm> beginning, it was designed in. So, after throwing out all the
lm> directories, devices, and 0 length files, we have:
lm> [ ... ]
lm> Funny. Not a lot of stuff around 1K. By the way, this is a server, 32 MB
lm> of ram, 4GB of disk, used for kernel builds (one was going on while I took
lm> the stats).
pcg> Ah, this "benchmark" is entirely irrelevant.
pcg> [ ... exclude text images, kernel build is a single process
pcg> application, third the machine is grossly overdimensioned for
pcg> a kernel build ... ]
pcg> The numbers I see on the timesharing systems at this site are very
pcg> different. Today I am the only user logged in. I will try to
pcg> gather pstat -i statistics when there is a little more load during
pcg> a peak hour.
Here they are. A quick and dirty thing on two Ultrix 4.0 DECsystem 5830s
used mainly for timesharing, and a SunOS 4.1 Sun 3/60 used for
timesharing and file and compute service. The 5830s had load averages
of around 10-12, with a couple of dozen users each, when the
measurements were taken, and the 3/60 had nearly a dozen users with a
load average of about 4-5.
The shell script used to process the pstat -i output is:
    sed '1,2d' \
    | case "$1" in
      ultrix) egrep -v ' [TM] |,...$' \
              | awk '{print int((substr($0,62,10)+1023)/1024)}';;
      sunos)  egrep '(DIR|REG|LNK)$' \
              | awk '{print int((substr($0,47,10)+1023)/1024)}';;
      *)      echo "$1" is not sunos or ultrix; exit 1;;
      esac \
    | sort -n | uniq -c
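For concreteness, the pipeline above is meant to be driven roughly
like this (the file name opensizes.sh is only for illustration):
    # feed the inode table to the size histogram pipeline above
    pstat -i | sh opensizes.sh ultrix     # on the DECsystem 5830s
    pstat -i | sh opensizes.sh sunos     # on the Sun 3/60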
Note that the methodology is different under Ultrix and SunOS: under
Ultrix, Text (executable image) inodes and Mount point directories are
excluded, but all other directories, symbolic links and regular files
are included; Ultrix does not have a special directory buffer cache,
even though it does have a directory entry cache. Under SunOS,
symbolic links, directories and all regular files, even executables,
are included, simply because it is easier that way. You seem to
indicate that directories should be excluded because there is a
separate buffer cache for them; but let's include them anyway, just to
get an idea of how many are open.
The numbers are (first subcolumn is count, second is size in KBytes):
  5830 #1       5830 #2        3/60
   12   0        32   0         7   0
  224   1       231   1       202   1
   26   2        66   2        30   2
   18   3        22   3        13   3
    7   4        14   4         8   4
    2   5        30   5         7   5
    4   6         7   6         8   6
    1   9         2   7         6   7
    2  10         4   8         6   8
    1  14         3  10         1  10
    1  16         2  11         2  11
    3  17         3  14         1  12
    2  26         1  16         1  13
(truncated here; there are quite a few larger sizes, but nearly all
with a count of 1).
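For reference, a histogram in this "count size" form (the output of
sort -n | uniq -c above) can be boiled down to a weighted mean and an
approximate median with a bit of awk; this is only a rough sketch, in
the same rounded-up KB units as above:
    # summarize "count size" pairs into a weighted mean and a median
    awk '{ cnt[NR] = $1; sz[NR] = $2; n += $1; sum += $1 * $2 }
    END  { printf "files: %d  mean: %.1f KB\n", n, sum / n
           half = n / 2
           for (i = 1; i <= NR; i++) {
               seen += cnt[i]
               if (seen >= half) { printf "median: about %d KB\n", sz[i]; exit }
           }
         }'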
The total number of inodes shown by pstat is over 600 for each of the
5830s and over 400 for the 3/60. From some other experiment, it seems
that about half of the 1KB open files are directories.
These numbers are quite different from yours. I would tend to attribute
that to the fact that a kernel build is a very particular single process
application. Timesharing systems, even very underloaded ones like those
above, are quite different. Variability reigns.
I would also like to reissue the static file size statistics for the
/usr partitions on the same machines (not on my home SysV machine,
which does not have them), because I have realized that one should
really include symbolic links; after all they *are* read in as well,
each with an IO transaction of its own. The description of the
environment for the measurements was, and has remained:
pcg> I have also had a look at the /usr trees of SunOS 4.1 and an Ultrix
pcg> 4.1 machine. The results follow; they include only the size of
pcg> files, not those of directories or symbolic links, and are a bit
pcg> crude. The SysV partitions have around 3,000 inodes each; the SunOS
pcg> /usr partition has almost 50MB with 5,700 inodes and the Ultrix
pcg> /usr has almost 250MB with 10,000 inodes.
The script used to produce the new figures is:
    ls -1Rs $* | egrep -v '^total|^$|[@:]$' \
    | sed 's/^\(....\) .*/\1/' | sort -n | uniq -c
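The invocation is simply, for example (the file name staticsizes.sh is
again only for illustration):
    # histogram the ls -s size field over a whole /usr tree
    sh staticsizes.sh /usr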
I repeat the figures for the same partitions as in my previous
posting, for comparison purposes; note that work has been going on in
the /usr trees of both systems in the past couple of days, such as
system generations:
 DEC links    DEC no links   Sun links    Sun no links
    338   0       355   0        28   0        22   0
  28782   1      4886   1      5720   1      2762   1
   2078   2      1595   2       903   2       726   2
   1422   3      1404   3       560   3       460   3
    706   4       705   4       391   4       310   4
    370   5       360   5       277   5       216   5
    435   6       435   6       185   6       147   6
    200   7       200   7       147   7       109   7
    249   8       249   8       101   8        70   8
    234   9       232   9        74   9        53   9
    232  10       238  10        71  10        49  10
    161  11       157  11        65  11        43  11
    162  12       160  12        48  12        35  12
     84  13        90  13        31  13        18  13
     67  14        67  14        21  14        12  14
    106  15       106  15        25  15        16  15
     83  16        83  16       200  16       182  16
     38  17        40  17        25  17         7  17
    145  18       147  18        14  18         9  18
     48  19        46  19        11  19         6  19
     34  20        33  20        16  20        11  20
     18  21        18  21        11  21         4  21
    125  22       129  22         8  22         6  22
--
Piercarlo Grandi | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk