[comp.lang.perl] memory management in perl ?

stef@zweig.sun (Stephane Payrard) (08/07/90)

Writing a utility in Perl that deals with a file, I can:
        1/ read the whole file into memory and process it
        2/ read the file in small chunks, processing each chunk
           before dealing with the next

1 allows me to be fast.
2 allows me to handle a file whatever its length.


For any given file, I would like to try strategy 1 and, if it fails,
fall back to 2.  How can I know that strategy 1 has failed, and how
can I handle such a failure to revert to strategy 2?  I use a lot of
big memory consumers (GNU Emacs, various window system tools), so the
available memory can change a lot: I can't determine in advance the
maximum size of a file that can be treated with method 1.
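[The two strategies might be sketched in Perl as below. The sample
file, the chunk size of 8192, and the &process routine (here it just
tallies bytes) are illustrative assumptions, not part of the question.]

```perl
# Create a small sample file so the sketch is self-contained
# (an assumption for illustration only).
open(OUT, ">sample.dat") || die "can't create sample.dat: $!";
print OUT "x" x 20000;
close(OUT);

sub process {                   # hypothetical routine: tally bytes seen
    local($chunk) = @_;
    $total += length($chunk);
}

# Strategy 1: slurp the whole file into one string (one big allocation).
open(FILE, "sample.dat") || die "can't open sample.dat: $!";
undef $/;                       # no record separator: <FILE> reads it all
$whole = <FILE>;
close(FILE);
$/ = "\n";                      # restore the normal record separator
&process($whole);
$slurp_total = $total;

# Strategy 2: read a fixed-size chunk at a time (bounded memory use).
$total = 0;
open(FILE, "sample.dat") || die "can't open sample.dat: $!";
while (read(FILE, $buf, 8192)) {
    &process($buf);             # finish this chunk before the next
}
close(FILE);
print "slurp=$slurp_total chunked=$total\n";
```

Both runs see the same 20000 bytes; only the peak memory differs.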


        stef

--
Stephane Payrard -- stef@sun.com -- (415) 336 3726
Sun Microsystems -- 2550 Garcia Avenue --  M/S 10-09 -- Mountain View  CA 94043


lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) (08/07/90)

In article <STEF.90Aug6135109@zweig.sun> stef@zweig.sun (Stephane Payrard) writes:
: 
: Writing a utility in Perl that deals with a file, I can:
:         1/ read the whole file into memory and process it
:         2/ read the file in small chunks, processing each chunk
:            before dealing with the next
: 
: 1 allows me to be fast.
: 2 allows me to handle a file whatever its length.
: 
: 
: For any given file, I would like to try strategy 1 and, if it fails,
: fall back to 2.  How can I know that strategy 1 has failed, and how
: can I handle such a failure to revert to strategy 2?  I use a lot of
: big memory consumers (GNU Emacs, various window system tools), so the
: available memory can change a lot: I can't determine in advance the
: maximum size of a file that can be treated with method 1.

In general, slurping in the whole file ISN'T any faster than doing
it piece by piece.  It's just easier to say.  If you have any concerns
at all about running out of memory, do it a piece at a time.

In the situations where it actually IS faster to slurp, we might
consider making out-of-memory trappable by eval.  But I couldn't
really guarantee a clean recovery without some thinking time, which
I don't have right now.  Particularly on reallocing memory, I'd have
to make sure I didn't end up freeing invalid pointers.
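[If out-of-memory ever did become trappable by eval, the fallback
Stephane wants might look like the sketch below. This is entirely
hypothetical: as noted above, a real malloc failure is not guaranteed
to be recoverable, so the stub here simulates the failure with die.
Both subroutine names are made up for illustration.]

```perl
sub slurp_and_process {         # stub simulating strategy 1 failing
    die "Out of memory!\n";     # a real OOM may not be trappable today
}

sub chunk_and_process {         # stub standing in for strategy 2
    $strategy_used = 2;
}

eval { &slurp_and_process("bigfile") };   # try strategy 1 first
if ($@) {                                 # $@ holds the death message
    &chunk_and_process("bigfile");        # fall back to strategy 2
}
print "fell back to strategy $strategy_used\n";
```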

Larry