jdudeck@polyslo.CalPoly.EDU (John R. Dudeck) (02/12/90)
In article <2527@leah.Albany.Edu> emb978@leah.Albany.Edu (Eric M. Boehm) writes:
>
>I have had suggestions to try different configurations, which I have
>done to a small extent. I am looking for ways to more rigorously measure
>performance changes than something like "X *seems* slower" or "Y *seems*
>faster". In other words, an objective way to measure the changes.

The rule of thumb for disk cache size is: make the cache as big as you
can without taking away memory that your programs will actually use.

If you try to measure it, you will probably never get anything
conclusive.  The reason is the way a cache works.  It holds disk data
that has already been read, in the hope that you will read the same
data again, so that a repeated disk access is not needed.  If you keep
reading the same data over and over, the cache will hit every time, and
you will see a fantastic speedup over running with no cache at all.  If
you read different data on every read, there will be no improvement at
all.  It all depends on how much your workload re-reads the same data
without reading so much other data in between that the cache has been
overwritten.  That is easier to estimate on the back of an envelope
than to establish with benchmark measurements!  (A rough sketch of the
arithmetic is appended below.)
-- 
John Dudeck                            "You want to read the code closely..."
jdudeck@Polyslo.CalPoly.Edu              -- C. Staley, in OS course, teaching
ESL: 62013975  Tel: 805-545-9549            Tanenbaum's MINIX operating system.
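
[Editorial sketch of the back-of-envelope arithmetic mentioned above.  The
average cost of a read is just a weighted average of the cache-hit time and
the disk-access time, weighted by how often the workload re-reads cached
data.  Every number in the little C program below is a made-up assumption;
substitute your own disk access time, cache service time, and guess at the
re-read fraction for your workload.]

/* cache_guess.c -- rough estimate of what a disk cache buys you.
 * All constants are placeholder guesses, not measured values.
 */
#include <stdio.h>

int main(void)
{
    double disk_ms  = 30.0;   /* assumed time for a real disk access (ms)   */
    double cache_ms = 0.5;    /* assumed time to satisfy a read from cache  */
    double hit_frac;          /* fraction of reads that re-read cached data */

    printf("hit fraction   avg read (ms)   speedup vs. no cache\n");
    for (hit_frac = 0.0; hit_frac <= 1.0001; hit_frac += 0.1) {
        /* weighted average of cache-hit cost and disk-access cost */
        double avg = hit_frac * cache_ms + (1.0 - hit_frac) * disk_ms;
        printf("   %4.1f          %6.2f            %5.2fx\n",
               hit_frac, avg, disk_ms / avg);
    }
    return 0;
}

[If the speedup column barely moves over the range of hit fractions you
believe your workload actually sees, no benchmark run is likely to show a
convincing difference either.]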