[comp.sys.next] Misc Questions

aberno@questor.wimsey.bc.ca (Anthony Berno) (03/05/91)

Well, after getting my cube+upgrade, I asked my salesperson why I didn't get 
an OD with system 2.0 on it (it was preloaded on the hard disk) and he 
explained that I had to buy it separately. After grumbling about having to 
use my own optical to back up the system, I posted a query on the net, to 
make sure that my salesman wasn't doing a snow job on me, and I was assured 
that, yes, you have to buy the OD separately and that I wasn't being snowed. 
Well...

Today, my documentation arrived along with an OD containing system 2.0.

But that is a matter of only slight interest. I have some more pressing 
queries:

1) When you add the size of root (found by using the Inspector panel in the 
workspace) with the amount of free space on the disk, is it supposed to 
total the size of the disk? I am short by about 80 meg on a 660 meg disk. 
No, I don't have a partition, in fact I removed it when I got my computer by 
doing a build disk. The transcript from the build revealed that it should 
have 690 meg, and when I opened up the computer to fix a little rattle, the 
sticker on the disk said it was 760 meg!

Does anyone else have this problem with missing disk space? If you wouldn't 
mind, please do the above calculation and tell me what you get. I'm curious 
if this is normal - I suspect it is - but it bothers me to have an amount of 
space equal to 4 of the hard drives on my old Mac gone missing.

2) Also, is it a bad sign if a hard disk rattles? I opened up the machine 
and stuck a few pieces of cardboard around the disk drive, and it stopped, 
and I'm happy. Was it simply a jiggly mount, or is my drive going to die?

Thanks very much for all past and future help. Believe me, I appreciate it!


 ---
    Anthony Berno (aberno@questor.wimsey.bc.ca)
      The QUESTOR Project: Free Public Access to Usenet & Internet in
                            Vancouver, BC, Canada, at +1 604 681 0670.

scott@erick.gac.edu (Scott Hess) (03/05/91)

In article <B15ay2w164w@questor.wimsey.bc.ca> aberno@questor.wimsey.bc.ca (Anthony Berno) writes:
   1) When you add the size of root (found by using the Inspector panel in the 
   workspace) with the amount of free space on the disk, is it supposed to 
   total the size of the disk? I am short by about 80 meg on a 660 meg disk. 
  No, I don't have a partition, in fact I removed it when I got my computer by 
   doing a build disk. The transcript from the build revealed that it should 
   have 690 meg, and when I opened up the computer to fix a little rattle, the 
   sticker on the disk said it was 760 meg!

   Does anyone else have this problem with missing disk space? If you wouldn't 
   mind, please do the above calculation and tell me what you get. I'm curious 
  if this is normal - I suspect it is - but it bothers me to have an amount of 
   space equal to 4 of the hard drives on my old Mac gone missing.

A big culprit would be the version of du (or the du-like functionality)
provided by the Workspace Inspector panel.  The default du(1) utility
gives the disk usage in bytes.  This is misleading because few files
occupy an exact integral number of blocks and fragments - most leave
some space free at the end of their last fragment.  Thus, on average,
if the number of files in the system is n, the amount of data "lost"
to the space at the end of fragments is n*frag_size/2.

Of course, that's a simplistic answer.  Most executable files, for
instance, use a multiple of 8k of space, and thus require a whole
number of fragments (actually blocks).  On the other hand, large
files don't use fragments for the last couple k, and thus suffer
larger average data "losses".

Any way you put it, though, the more files you have, the more
space is lost to this phenomenon.
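To put a number on that, here's a back-of-the-envelope sketch in Python
(not from the original post; the 1024-byte fragment size is the usual
NeXT default, and the file count is just an illustration):

```python
# Rough estimate of space "lost" to partially filled final fragments.
# Assumes the tail of each file is uniformly distributed over the
# fragment, so the average waste per file is frag_size / 2.

def expected_slack(n_files, frag_size=1024):
    """Average total bytes wasted at the end of last fragments."""
    return n_files * frag_size // 2

# For example, 50,000 files waste about 25.6 million bytes on average:
print(expected_slack(50_000))          # bytes
print(expected_slack(50_000) / 2**20)  # same figure in binary megabytes
```

So a well-populated disk can easily "lose" tens of megabytes this way
before any of the file-system reserves are even counted.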

Anyhow, a solution to this particular problem is to grab the GNU
fileutils version of du.  It counts the size as the real space
allocated to the file (thus counting some unused space, too).
This should come closer to the real value.

Another place where data can be lost is the blocks reserved on a
disk to help prevent fragmentation.  BSD
FFS uses a weird and wonderful (with the emphasis on wonderful)
system to keep files from being spread across multiple cylinder
groups, and more importantly, to keep one file from forcing others
to be spread across multiple cylinders.  To make this work well,
from 5 to 10% of the space on the file system is reserved.  The
actual amount depends on whether it was optimized for space or
speed (that changes the way things work).

Lastly, there might be bad blocks and blocks used to replace them
counted in the disk's label, and those can't be used for regular
storage (well, it would sort of defeat the purpose, if they were).
Anyhow, more than likely all of your disk is there, somewhere . . .

Later,
--
scott hess                      scott@gac.edu
Independent NeXT Developer	GAC Undergrad
<I still speak for nobody>
"Tried anarchy, once.  Found it had too many constraints . . ."
"I smoke the nose Lucifer . . . Bannana, banna."

zimmer@calvin.stanford.edu (Andrew Zimmerman) (03/05/91)

In article <SCOTT.91Mar4235920@erick.gac.edu> scott@erick.gac.edu (Scott Hess) writes:
>In article <B15ay2w164w@questor.wimsey.bc.ca> aberno@questor.wimsey.bc.ca (Anthony Berno) writes:
>   1) When you add the size of root (found by using the Inspector panel in the 
>   workspace) with the amount of free space on the disk, is it supposed to 
>   total the size of the disk? I am short by about 80 meg on a 660 meg disk. 

>A big culprit would be the version of du (or the du-like functionality)
>provided by the Workspace Inspector panel.  The default du(1) utility
>provided gives the disk usage in bytes.  This is misleading because
>few files require an integral number of blocks and fragments - most
>leave some free at the end of their frags.  Thus, on average, if the
>number of files in the system is n, the amount of data "lost" to
>the space at the end of fragments is n*frag_size/2.
>Later,
>--
>scott hess                      scott@gac.edu
>Independent NeXT Developer	GAC Undergrad

While what Scott said is true, there is another factor that might be
causing some of the confusion.  The Unix file system reserves 10% of the
size of the formatted disk to improve performance.  That would be about
66 megs on the 660 meg disk.
So, we have
790 - unformatted
660 - formatted
594 - unix file system
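That arithmetic can be checked in a couple of lines (a sketch, not from
the original post; the 10% is the default minfree value, not a fixed
constant of the file system):

```python
# Default BSD FFS minfree reserve is 10% of the formatted size.
MINFREE = 0.10

formatted_mb = 660
reserved_mb = formatted_mb * MINFREE    # space held back from normal users
usable_mb = formatted_mb - reserved_mb  # what's left for the file system

print(reserved_mb)  # about 66 megs reserved
print(usable_mb)    # about 594 megs usable
```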

The following is an attempt to be helpful.  Since I have not tried this 
myself, please use at your own risk:

"You can tune a fish, and you can tune a file system"
    or
"How to nuke your filesystem"

                                                                  tunefs(8)

     Name
          tunefs - tune up an existing file system

     Syntax
          /etc/tunefs [ options ]

     Description
          The tunefs command is designed to change the dynamic parameters
          of a file system which affect the layout policies.  The parame-
          ters which are to be changed are indicated by the options listed
          below:

     Options

          -m minfree
                    This value specifies the percentage of space held back
                    from normal users; the minimum free space threshold.
                    The default value used is 10%.  This value can be set
                    to zero, however up to a factor of three in throughput
                    will be lost over the performance obtained at a 10%
                    threshold.  Note that if the value is raised above the
                    current usage level, users will be unable to allocate

     Restrictions
          This program should work on mounted and active file systems.
          Because the super-block is not kept in the buffer cache, the pro-
          gram will only take effect if it is run on dismounted file sys-
          tems.  If run on the root file system, the system must be
          rebooted.

I have left off all of the options other than the one that might have
a direct effect on the original problem.

Note:  Don't blame me if something goes wrong.  I have no reason to think
that something will go wrong, I'm just being careful.

Andrew
zimmer@calvin.stanford.edu

rpruess@umaxc.weeg.uiowa.edu (Rex Pruess) (03/06/91)

In article <B15ay2w164w@questor.wimsey.bc.ca> aberno@questor.wimsey.bc.ca (Anthony Berno) writes:
> Does anyone else have this problem with missing disk space? If you wouldn't 
> mind, please do the above calculation and tell me what you get. I'm curious 

Hmmm.  I'm curious now.

Looking at the mkfs and df output, there is a fair amount of space used
before you even get a chance to save files on the disk.  I assume mkfs
takes a good chunk of space for overhead (e.g., superblocks, inode
allocation, and reserved space for efficiency).

Here's a chopped down version of the mkfs output:
  /etc/mkfs -N /dev/rsd0a 357560 22 10 8192 1024 32 10 60 4096
  /dev/rsd0a: 357560 sectors in 1626 cylinders of 10 tracks, 22 sectors
  366.1Mb in 51 cyl groups (32 c/g, 7.21Mb/g, 1728 i/g)

The mkfs parm 357560 is the number of 1024 byte sectors.  This yields
a total space of:
             357,560 * 1024 = 366,141,440 bytes

The third to last mkfs parm is the minfree parm which is 10 in this case.
Thus, 10% of the disk is reserved.  See the mkfs man page for details.

Output from the df command follows:
  % df
  Filesystem      kbytes    used   avail capacity  Mounted on
  /dev/sd0a       345711  225040   86099    72%    /

Note that used+avail is 311139 or about 10% less than kbytes (345711).
This is due to the space reserved to allow the file system to work well.
See the df man page for details.  (I assume this 10% is a direct result
of the mkfs minfree parm.)

The df output shows the partition has 354,008,064 bytes in it:
              345711 * 1024 = 354,008,064

The difference between mkfs & df is about 3%.  Again, I assume this is
the overhead for superblocks, inode tables, etc.
             366,141,440 (mkfs)
           - 354,008,064 (df)
             -----------
              12,133,376 (about 3% diff)
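The whole chain of figures above checks out; here's the same bookkeeping
as a small Python sketch (not from the original post; all the constants
are taken from the mkfs and df output quoted above):

```python
# Reproduce the mkfs/df bookkeeping from the quoted output.
mkfs_sectors = 357560          # 1024-byte sectors handed to mkfs
df_kbytes = 345711             # total kbytes reported by df
used, avail = 225040, 86099    # used/avail columns from the df line

mkfs_bytes = mkfs_sectors * 1024
df_bytes = df_kbytes * 1024

print(mkfs_bytes)              # total space mkfs had to work with
print(df_bytes)                # total space df sees in the partition
print(mkfs_bytes - df_bytes)   # ~3% overhead: superblocks, inodes, etc.

# used + avail comes out ~10% short of kbytes: the minfree reserve.
print(used + avail)
print(1 - (used + avail) / df_kbytes)  # fraction held back, ~0.10
```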
--
Rex Pruess, Weeg Computing Center, Univ of Iowa, Iowa City, IA 52242
rpruess@umaxc.weeg.uiowa.edu (NeXTmail)               (319) 335-5452

bennett@mp.cs.niu.edu (Scott Bennett) (03/06/91)

In article <RPRUESS.91Mar5104547@umaxc.weeg.uiowa.edu> rpruess@umaxc.weeg.uiowa.edu (Rex Pruess) writes:
>In article <B15ay2w164w@questor.wimsey.bc.ca> aberno@questor.wimsey.bc.ca (Anthony Berno) writes:
>> Does anyone else have this problem with missing disk space? If you wouldn't 
>> mind, please do the above calculation and tell me what you get. I'm curious 
>
>Hmmm.  I'm curious now.
>
>Looking at the mkfs and df output, there is a fair amount of space used
>before you even get a chance to save files on the disk.  I assume mkfs
>takes a good chunk of space for overhead (e.g., superblocks, inode
>allocation, and reserved space for efficiency).
>
>Here's a chopped down version of the mkfs output:
>  /etc/mkfs -N /dev/rsd0a 357560 22 10 8192 1024 32 10 60 4096
>  /dev/rsd0a: 357560 sectors in 1626 cylinders of 10 tracks, 22 sectors
>  366.1Mb in 51 cyl groups (32 c/g, 7.21Mb/g, 1728 i/g)
>
>The mkfs parm 357560 is the number of 1024 byte sectors.  This yields
>a total space of:
>             357,560 * 1024 = 366,141,440 bytes

     Worse yet, many (if not most) disk drive vendors incorrectly 
advertise the capacity of their products.  For example, in the above
case many vendors would advertise the drive as a 366MB drive.  This
is wrong because 1MB=1024KB=1048576B.  So, from this we can calculate
in reverse that 366141440B=357560KB=~349.18MB, *not* 366MB.  It would
be nice if the vendors would stick to straight info, rather than just
adding to the confusion.
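For the curious, the two conventions are easy to compare directly (a
sketch, not from the original post, using the byte total from Rex's
mkfs output above):

```python
# Vendor "megabytes" (10^6 bytes) vs. binary megabytes
# (1 MB = 1024 KB = 1,048,576 bytes).
total_bytes = 357560 * 1024           # 366,141,440 bytes, per mkfs

vendor_mb = total_bytes / 1_000_000   # what a spec sheet might claim
binary_mb = total_bytes / 1_048_576   # what the OS actually gives you

print(round(vendor_mb, 2))
print(round(binary_mb, 2))
```

The gap between the two figures is about 4.9% - enough to account for a
good chunk of any "missing" capacity all by itself.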
>
>  [remainder of text deleted  --SJB]
>--
>Rex Pruess, Weeg Computing Center, Univ of Iowa, Iowa City, IA 52242
>rpruess@umaxc.weeg.uiowa.edu (NeXTmail)               (319) 335-5452


                                  Scott Bennett, Comm. ASMELG, CFIAG
                                  Systems Programming
                                  Northern Illinois University
                                  DeKalb, Illinois 60115
**********************************************************************
* Internet:       bennett@cs.niu.edu                                 *
* BITNET:         A01SJB1@NIU                                        *
*--------------------------------------------------------------------*
*  "I, however, place economy among the first and most important     *
*  of republican virtues, and public debt as the greatest of the     *
*  dangers to be feared."  --Thomas Jefferson                        *
**********************************************************************

marcus@eecs.cs.pdx.edu (Marcus Daniels) (03/23/91)

Is 20 meg definitely worth it?  What is the best way to watch swapping
activity?

Edit doesn't seem to search backwards correctly with regular
expressions.

Anyone using epoch?

------------------------------------------------------------------------------
- marcus@eecs.ee.pdx.edu / ....!uunet!tektronix!psueea!eecs!marcus
- "The power of accurate observation is called cynicism by those who
	don't have it"

fischer@iesd.auc.dk (Lars P. Fischer) (03/25/91)

>>>>> On 22 Mar 91 23:50:45 GMT, marcus@eecs.cs.pdx.edu (Marcus Daniels) said:

Marcus> Is 20 meg definitely worth it? 

Most Unix workstations run at 10-15% of their real performance, due to
too little RAM.

Marcus> What is the best way to watch swapping activity?

Try "man vmstat".

/Lars
--
Lars Fischer,  fischer@iesd.auc.dk   |Erst kommt das Fressen, dann die Moral
CS Dept., Univ. of Aalborg, DENMARK. |		- B. Brecht

madler@pooh.caltech.edu (Mark Adler) (03/25/91)

In article <FISCHER.91Mar24231722@galilei.iesd.auc.dk> fischer@iesd.auc.dk (Lars P. Fischer) writes:
>Try "man vmstat".

Actually, it's called vm_stat instead of vmstat on the NeXT.  And if you
don't have the extended release, obviously man vm_stat won't help much
either.  Just do vm_stat.  The output is pretty much self-explanatory,
if you're familiar with virtual memory techniques.

Mark Adler
madler@pooh.caltech.edu

zazula@uazhe0.physics.arizona.edu (RALPH ZAZULA) (03/26/91)

In article <FISCHER.91Mar24231722@galilei.iesd.auc.dk>, fischer@iesd.auc.dk (Lars P. Fischer) writes...
>>>>>> On 22 Mar 91 23:50:45 GMT, marcus@eecs.cs.pdx.edu (Marcus Daniels) said:
> 
>Marcus> Is 20 meg definitely worth it? 
> 
>Most Unix workstations run at 10-15% of their real performance, due to
>too little RAM.
> 
>Marcus> What is the best way to watch swapping activity?
> 
>Try "man vmstat".
> 

Isn't it "man vm_stat"?  I think so, since the command is vm_stat.
Also, the program Monitor (available at most NeXT archive sites
in the 1.0-release directory) shows a realtime display of swapping
activity.

>/Lars
>--
>Lars Fischer,  fischer@iesd.auc.dk   |Erst kommt das Fressen, dann die Moral
>CS Dept., Univ. of Aalborg, DENMARK. |		- B. Brecht
Ralph

   |----------------------------------------------------------------------|
   | Ralph Zazula                               "Computer Addict!"        |
   | University of Arizona                 ---  Department of Physics     |
   |   UAZHEP::ZAZULA                            (DecNet/HEPNet)          |
   |   zazula@uazhe0.physics.arizona.edu         (Internet)               |
   |----------------------------------------------------------------------|
   |   "You can twist perceptions, reality won't budge."  - Neil Peart    |
   |----------------------------------------------------------------------|

cs00jec@unccvax.uncc.edu (Jim Cain) (04/29/91)

I will be losing access to the net, ftp, etc. in the next week when my account
is deleted at the end of the semester. I have been ftp'ing stuff like mad --
about 20-25MB over the last two days (this is not fun at 1200 baud!! But that's
the only speed available on our school's network).

I have a few final questions to ask before I can no longer ask them or receive
the answers:

1.  UUCP: The school "does not have the resources" to give me UUCP access, so I
    must look elsewhere. Portal Communications offers UUCP access for $39.95
    per month plus $1.95 per hour. For tech support it is $95.95 (!) per month.
    Is this reasonable? Also, they offer the complete cnews package, including
    documentation, for $595. I know I can ftp this stuff, but with no support
    or documentation, what are my chances of getting it set up? I know a good
    bit about UNIX but have little experience with UUCP. What about Nutshell's
    book _Managing UUCP_? Is this enough to get it all set up?

    Alternatively, for $10 a month (plus $19.95 to open), Portal will give you
    a login to read news and mail from their system. This is much cheaper, but
    I'd rather do all this from NeXTstep.

    Some of the stuff I've downloaded includes all the cnews sources, trn and
    rn from uunet, and NewsGrazer from purdue. (BTW, Grazer is fantastic!) Once
    I get the extended release, will I be able to compile this C code? Are
    there any libraries, etc. I should ftp now?

2.  I have noticed that when you Save in a Print... dialog, the resulting
    PostScript does not include the fonts used. Is there any way to have them
    included? I work at a service bureau, and this is a common way to have
    documents printed, especially from PC's since we do not support any PC
    applications. Without font data, the printer must have (an identical
    version of) the font already downloaded. I can't print my docs on our
    Linos unless the fonts are included.

3.  Does the extended release include all the sources/binaries/etc. necessary
    to recompile the kernel? I would like to maintain disk quotas on my NeXT
    for my roommates/friends who have logins.

4.  Is there any good BBS software out there for UNIX or NeXTstep? A NeXTstep
    BBS sounds like a good project. Maybe I'll do it once I buy another hard
    disk and get the extended release installed.

5.  I recently saw references to UPS's but cannot find them again. Does anyone
    have any recommendations on manufacturers/models/dealers?

Well, that's all I can think of for now. Many thanks for any info.

Jim


-- 
=================== Jim Cain * cs00jec@unccvax.uncc.edu ===================
               The University of North Carolina at Charlotte
             College of Engineering * Dept of Computer Science
========== "'ave you been shopping?" * "No, I've been shopping." ==========