[comp.unix.microport] V/AT2.3 Hard Disk File Size Limit?

markc@wpi.wpi.edu (Mark B. Cohen) (02/21/90)

My apologies if this has been covered before:  I only picked up
this group about six months ago, when I got my V/AT.

I keep encountering what appears to be an intrinsic file
size limit on hard disk files:  Any attempt by any program
to write a file larger than 1,228,800 bytes fails.

Of course, 1228800 bytes is exactly 1200Kb.

The buglist -- excuse me... "open problems" -- listed in the
release notes for V/AT makes no mention of any file size limit.

There is plenty of free space, and plenty of free inodes, on both
disks (it's a two-disk installation).  The csh limit command reports
everything as "unlimited".  Programs hitting this limit include
uPort's tar and cpio, as well as uncompress and Kermit.

Does anyone know of a fix/workaround/kernel patch/driver replacement/etc/etc??
If this is an existing "open problem" [ feature? :) ], should Microport be
notified?  Would it even be worthwhile to notify them if this is the case?

Thanks in advance,
Mark Cohen

P.S.  Does anyone else have a Leading Edge M-H?  Please write if you do.
-- 
Internet:  markc@wpi.wpi.edu                         "This is drugs...
UUCP:      uunet!wpi.wpi.edu!markc                    this is your brain...
BITnet:    markc@wpi.bitnet                           this is your breakfast."

pwilcox@paldn.UUCP (Peter McLeod Wilcox) (02/23/90)

In article <8815@wpi.wpi.edu>, markc@wpi.wpi.edu (Mark B. Cohen) writes:
> I keep encountering what appears to be an intrinsic file
> size limit on hard disk files:  Any attempt by any program
> to write a file larger than 1,228,800 bytes fails.

My experience is with uPort's SV386, but it may apply here.  One possibility
is the kernel-patchable variable "Ulimit", which specifies the maximum size
of a user-writable file in 512-byte blocks.  In SV386 the default is 16k,
giving an 8meg file; the default may have been set lower in SV/AT.  I find
it interesting that your limit is the same size as the floppy disk...
-- 
Pete Wilcox		...gatech!nanovx!techwood!paldn!pwilcox

jay@splut.conmicro.com (Jay "you ignorant splut!" Maynard) (02/23/90)

In article <8815@wpi.wpi.edu> markc@wpi.wpi.edu (Mark B. Cohen) writes:
>I keep encountering what appears to be an intrinsic file
>size limit on hard disk files:  Any attempt by any program
>to write a file larger than 1,228,800 bytes fails.

This isn't a bug, it's a feature...officially, according to AT&T. Look
up 'ulimit' in the book.

Microport made it relatively simple to change:
As root, say 'patch /unix ulpatch 0x7fff'. This will raise the ulimit to
32K 512-byte blocks (I think...maybe it's 32K 1K-byte blocks). In any
case, the largest file on my system is in the 3.5 meg range, so that has
worked well for me.

General question: Is that number signed or unsigned? Can I get away with
0xffff, or will it cause problems?

-- 
Jay Maynard, EMT-P, K5ZC, PP-ASEL   | Never ascribe to malice that which can
jay@splut.conmicro.com       (eieio)| adequately be explained by stupidity.
{attctc,bellcore}!texbell!splut!jay +----------------------------------------
                             Free the DC-10!

billd@fps.com (Bill Davidson) (02/24/90)

In article <N9#+D&@splut.conmicro.com> jay@splut.conmicro.com (Jay "you ignorant splut!" Maynard) writes:
>As root, say 'patch /unix ulpatch 0x7fff'. This will raise the ulimit to
>32K 512-byte blocks (I think...maybe it's 32K 1K-byte blocks). In any
...
>General question: Is that number signed or unsigned? Can I get away with
>0xffff, or will it cause problems?

I believe I tried this a couple of years ago (memory is fading :-( ).
Anyway, it didn't work: the value is treated as signed.  I think this
was with 2.2, so it may have changed in 2.3 or 2.4.  I haven't tried it
more recently because I really don't care.  The biggest file size I
could make it accept then was just under 16Meg, and since I only needed
files of about 7Meg, I never really cared to go higher.

--Bill Davidson

tore@motorola.se (Tore Fahlstroem) (02/27/90)

In article <8815@wpi.wpi.edu> markc@wpi.wpi.edu (Mark B. Cohen) writes:
>I keep encountering what appears to be an intrinsic file
>size limit on hard disk files:  Any attempt by any program
>to write a file larger than 1,228,800 bytes fails.

That is how SVR2 works.  Use ulimit in sh(1) to set the file size
limit, given in 512-byte blocks.  Only root may raise the ulimit.

You can also change the ulimit parameter in the kernel.