roy@prism.gatech.EDU (Roy Mongiovi) (03/02/91)
Actually, I've found it immensely helpful to create a /cores directory so that
all core dumps show up in one place.  I've got a single-user cube, so I just
make it world-writable and I can easily clean up after all the things that now
seem to dump core in 2.0.  It's a whole lot easier than having to do a find....
--
Roy J. Mongiovi			Systems Support Specialist
Office of Computing Services	Georgia Institute of Technology
Atlanta, Georgia 30332-0275	(404) 894-4660
uucp: ...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!roy
ARPA: roy@prism.gatech.edu
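[The sweep Roy is avoiding can be sketched as below; /cores is his convention, and the throwaway demo directory stands in for a real filesystem root, where the search would start at /.]

```shell
# Demo tree standing in for a real disk; on a live system the
# search root would be / instead of a temp directory.
demo=$(mktemp -d)
mkdir -p "$demo/home/roy" "$demo/cores"
touch "$demo/home/roy/core" "$demo/cores/core.1234"

# Without a central /cores directory you have to walk the whole
# tree looking for stray core files:
find "$demo" -name 'core*' -type f -print

# ...and then delete whatever the sweep turns up:
find "$demo" -name 'core*' -type f -exec rm -f {} \;

# With a world-writable /cores collecting everything, the same
# cleanup would just be: rm -f /cores/*
```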
glenn@heaven.woodside.ca.us (Glenn Reid) (03/02/91)
In article <23137@hydra.gatech.EDU> roy@prism.gatech.EDU (Roy Mongiovi) writes:
> Actually, I've found it immensely helpful to create a /cores directory
> so that all core dumps show up in one place.

I've found it even more helpful to keep my applications from dumping core at
all when they crash.  If you type the following into a Terminal window, it
should keep core files from getting dumped anywhere:

	localhost> limit core 0

For some reason that I don't fully understand, even though the "limit"
command is actually built into /bin/csh, this approach also seems to prevent
NeXTstep applications from leaving large core files.  Core files are only
useful for debugging, and I'd rather not have them at all (especially when
disk space is tight).

I'm also curious why the core files are "ubiquitous".  What software are you
people running that dumps core so often?  I rarely have anything crash on my
system.  Just curious.
--
Glenn Reid				RightBrain Software
glenn@heaven.woodside.ca.us		NeXT/PostScript developers
..{adobe,next}!heaven!glenn		415-851-1785 (fax 851-1470)
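[A rough Bourne-shell equivalent of Glenn's command, assuming a shell with the `ulimit` builtin; csh users would put `limit coredumpsize 0` in .cshrc instead.]

```shell
# Disable core dumps for this shell and its children; the csh
# spelling Glenn uses, "limit core 0", is an unambiguous prefix
# of csh's "limit coredumpsize 0".
ulimit -c 0

# Verify: the reported core-size limit should now be 0.
ulimit -c
```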
eps@toaster.SFSU.EDU (Eric P. Scott) (03/02/91)
In article <23137@hydra.gatech.EDU> roy@prism.gatech.EDU (Roy Mongiovi) writes:
>Actually, I've found it immensely helpful to create a /cores directory
>so that all core dumps show up in one place.  I've got a single user
>cube

And if you had a NetBoot client ...?

"It should have been /private/cores"

					-=EPS=-
rpm@sgi1.wag.caltech.edu (Richard P. Muller) (03/03/91)
In article <442@heaven.woodside.ca.us>, glenn@heaven (Glenn Reid) writes:
>In article <23137@hydra.gatech.EDU> roy@prism.gatech.EDU (Roy Mongiovi) writes:
>> Actually, I've found it immensely helpful to create a /cores directory
>> so that all core dumps show up in one place.
>
>I've found it even more helpful to keep my applications from dumping core at
>all when they crash.  If you type the following thing into a Terminal window,
>it should keep core files from getting dumped anywhere:
>
>	localhost> limit core 0
>
[Stuff deleted]

I'm not familiar with the particulars of Mach, as I haven't yet gotten my
NeXT, but I'll answer with what I know.  Built into csh (and maybe the Bourne
shell?) is a command to limit the size of a core file.  By typing 'limit
coredumpsize N' (csh accepts the unambiguous prefix 'limit core N') you can
limit the core file to N kilobytes.  The advantage is that this saves disk
space and time (sometimes huge core dumps take forever...).  The disadvantage
is that very often important information lies at the end of a core file, so
limiting its size will make it more difficult to analyze why the program
crashed in the first place.
hardy@golem.ps.uci.edu (Meinhard E. Mayer (Hardy)) (03/03/91)
1. You can put the "limit core 0" statement into your .login or .cshrc file.

2. In SysV Unix (which I use on my other machines) I use another trick: make
an empty core file in the directory where you most often dump cores, and set
its protection to 0:

	----------  1 root     golem           0 Jan 27 23:37 core

This prevents the system from dumping core in your directory.

3. Use cron to find and remove nonempty cores once a week.

Hardy
-------****-------
Meinhard E. Mayer (Prof.)
Department of Physics, University of California
Irvine CA 92717; (714) 856 5543
hardy@golem.ps.uci.edu or MMAYER@UCI.BITNET
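[Hardy's second and third tricks might look like the sketch below; the scratch directory, cron schedule, and search path are illustrative assumptions, not from his post.]

```shell
# Trick 2: plant an empty, mode-0 "core" decoy in the directory
# where cores usually land (a scratch directory stands in here;
# on a real SysV system you would also chown it to root).
d=$(mktemp -d)
touch "$d/core"
chmod 0 "$d/core"
ls -l "$d/core"

# Trick 3: a weekly cron entry in root's crontab might read:
#   0 3 * * 0  find /home -name core -type f -size +0 -exec rm -f {} \;
# i.e. every Sunday at 3am, remove any nonempty core files.
```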