dce@Solbourne.COM (David Elliott) (06/04/89)
One of the more interesting areas I've had discussions with people about is the idea of being able to advise the system on how memory will be used. Programmers can often predict which areas of a program or chunks of data space are going to be used more often than others, yet there is generally no way for the programmer to say that a chunk of code is or is not needed often, or is only called at a point at which speed is not critical.

In one case, a developer described a large array that was being scanned a number of times in both row-major and column-major order. He noticed that the slowdown came in the latter case, since the OS was paging his data in an inefficient manner.

Some of these improvements do not depend on the OS at all. I know that the folks at MIPS developed a program that would reorganize code for maximum cache speed.

-- 
David Elliott
dce@Solbourne.COM
...!{boulder,nbires,sun}!stan!dce
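[The scan described above can be sketched in C; the array shape and element type are illustrative assumptions, not taken from the developer's report. C stores arrays in row-major order, so the inner-loop stride differs between the two scans:]

```c
#include <stddef.h>

#define ROWS 1024
#define COLS 1024   /* one row of ints = 4 KiB, a typical page size */

static int a[ROWS][COLS];

/* Row-major scan: consecutive accesses stay on the same page
 * until the row (and the page) is exhausted. */
long sum_row_major(void) {
    long s = 0;
    for (size_t i = 0; i < ROWS; i++)
        for (size_t j = 0; j < COLS; j++)
            s += a[i][j];
    return s;
}

/* Column-major scan of the same row-major array: each access
 * jumps ahead a full row (COLS * sizeof(int) bytes), so with
 * rows of page size or larger, every access lands on a
 * different page -- the pattern that triggered the paging
 * slowdown described above. */
long sum_col_major(void) {
    long s = 0;
    for (size_t j = 0; j < COLS; j++)
        for (size_t i = 0; i < ROWS; i++)
            s += a[i][j];
    return s;
}
```

[Both functions compute the same sum; only the order of page references differs.]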
poser@csli.Stanford.EDU (Bill Poser) (06/04/89)
HP-UX, the version of UNIX on HP 9000/3x0 machines, actually implements a memadvise system call that allows the program to tell the kernel what its memory usage pattern is expected to be. I'd be curious to know if any studies of the effect on performance have been done.
andrew@frip.WV.TEK.COM (Andrew Klossner) (06/06/89)
[] "Programmers can often predict what areas of a program or chunks of data space are going to be used more often than others, yet there is generally no way for the programmer to say that a chunk of code is or is not needed often, or is only called at a point at which speed is not critical."

Some research in the 1970s, when demand paging was new (to many of us), suggested that programmers cannot do as good a job as they think they can in predicting memory access behavior. In general, totally uninformed, dynamic LRU memory migration was found to win over extensively tweaked, programmed memory migration. (This was reported in a CACM in about mid-decade; sorry I can't quote chapter and verse from memory.)

"In one case, a developer described a case in which a large array was being scanned a number of times in both row-major and column-major order. He noticed that the slowdown came in the latter case since the OS was paging his data in an inefficient manner."

Well, sure; in column-major order (assuming a non-Fortran language), the program is stepping to a new page much more often than in row-major order. If a single row is a page or more in size, column-major order will touch a new page with every single index iteration. No amount of memory advice to the OS can mitigate the result of this pathological behavior.

  -=- Andrew Klossner   (uunet!tektronix!orca!frip!andrew)      [UUCP]
                        (andrew%frip.wv.tek.com@relay.cs.net)   [ARPA]
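[The "new page with every single index iteration" claim can be checked with a small address-arithmetic model; the page size and array shape below are assumptions chosen so that one row is exactly one page, and no real memory is touched:]

```c
#include <stddef.h>

#define PAGE 4096
#define ROWS 256
#define COLS 1024              /* one row of ints = 4 KiB = one page */

/* Count how often each traversal order steps onto a different page,
 * by tracking the page number of each address the loops would
 * generate.  This models page references only; it says nothing
 * about what any particular VM system does with them. */
long page_switches(int col_major) {
    long switches = 0;
    long prev = -1;
    for (size_t x = 0; x < (size_t)ROWS * COLS; x++) {
        size_t i = col_major ? x % ROWS : x / COLS;
        size_t j = col_major ? x / ROWS : x % COLS;
        long page = (long)((i * COLS + j) * sizeof(int) / PAGE);
        if (page != prev)
            switches++;
        prev = page;
    }
    return switches;
}
```

[With these sizes, the row-major scan changes page only 256 times (once per row), while the column-major scan changes page on every one of the 262,144 accesses, matching the pathological behavior described above.]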
rang@cpsin3.cps.msu.edu (Anton Rang) (06/07/89)
In article <3496@orca.WV.TEK.COM> andrew@frip.WV.TEK.COM (Andrew Klossner) writes:
> [ stuff about row-major vs. column-major order ]
>
>Well, sure; in column-major order (assuming a non-Fortran language),
>the program is stepping to a new page much more often than in row-major
>order. If a single row is a page or more in size, column-major order
>will touch a new page with every single index iteration. No amount of
>memory advice to the OS can mitigate the result of this pathological
>behavior.

How about asking for a LIFO paging strategy on that section of memory?

+---------------------------+------------------------+
| Anton Rang (grad student) | "VMS Forever!"         |
| Michigan State University | rang@cpswh.cps.msu.edu |
+---------------------------+------------------------+