Dan Karron@UCBVAX.BERKELEY.EDU (06/12/91)
Here's the problem: loading pixels from disk is limited by the speed at
which the disks can unload them, so graphics performance can appear to
suffer due to I/O limitations.

Proposed solution: the operating system saves disk blocks in memory in
some fashion, but purges them when memory gets tight. I have tried
running some trivial program such as wc on image files prior to putting
them up with ipaste.

Now: how can I pin a disk block/page in memory when I know that I will
be trying to open it at some time in the future? Then, how can I tag a
block as free after I know I will not need it? Are there any
diagnostics to keep track of disk blocks in memory? Any way to keep
stats on cache hits/misses?

My application is to load into memory pixels that my user will probably
want to look at while he/she is still looking at the name of the pixel
file. Then, when the user decides to look at it, it will already be in
memory, and the delay while the disk unloads will have already passed.

Cheers!

dan.

| karron@nyu.edu (e-mail alias)           Dan Karron, Research Associate     |
| Phone: 212 263 5210  Fax: 212 263 7190  New York University Medical Center |
| 560 First Avenue                        Digital Pager <1> (212) 397 9330   |
| New York, New York 10016                <2> 10896  <3> <your-number-here>  |
blythe@sgi.com (David R. Blythe) (06/13/91)
In article <9106112151.AA24876@karron.med.nyu.edu> karron@cmcl2.nyu.edu writes:
>Here's the problem: Loading pixels from disk is limited by the speed that
>the disks can unload.
>
>Graphics performance can appear to suffer due to io limitations.
>
>Proposed solution: The operating system saves disk blocks in memory
>in some fashion, but purges them when memory gets tight. I have
>tried running some trivial program such as wc on image files prior to
>putting them up with ipaste.

It's actually more complicated than that. The OS will definitely devote
large hunks of memory to serve as a disk cache, but it must also
incorporate techniques so that the cache isn't polluted by sequential
reads of large files (images?). The OS should also be attempting disk
read-ahead to try to reduce the latency in reading a file. Perhaps
someone from the OS group could comment more on the IRIX strategies and
how a programmer can maximize their effectiveness.

>Now: How can I pin a disk block/page in memory because I know that I will
>be trying to open it at some time in the future. Then, How can I tag
>a block as free after I know I will not need it ?

You can use the mpin() and munpin() system calls to lock portions of
your virtual address space in memory (see mpin(2)).

I have used the program movie a lot to view sequences of image files.
It reads them all into virtual memory and then loops through them (it
doesn't pin them). It works very well if the total size fits in
physical memory; IRIX is very good about kicking everything
nonessential out of memory, so you can get a substantial part of it
(about 9 or 10 meg, perhaps a little more, on a 16M machine; the
graphics servers and the kernel consume the rest). However, if the
total size is larger than physical memory, the performance is pretty
pathetic: the machine (4D/70GT) spends most of its time waiting for the
swap transfers to complete, and there doesn't appear to be any
read-ahead happening.
It would be an interesting experiment to see if you could add calls to
mpin()/munpin() to movie, to see whether they could give the OS better
counsel on what to page in and out (i.e. LRU policies aren't always the
best). [Or I could be out to lunch.]

>Are there any diagnostics to keep track of disk blocks in memory ?
>Any way to keep stats on cache hits/misses ?

The command osview (or gr_osview) can give you some idea of how the
various subsystems are performing (number of reads and writes, cache
performance, ...).

>My application is to load into memory pixels that my user will probablly
>want to look at while he/she is still looking at the name of the
>pixel file. Then when the user decides that they want to look at it, it
>will already be in memory, and the delay while the disk unloads will have
>already passed.

You can also achieve an interesting tradeoff of CPU time versus I/O
latency by using compression. The run-length encoding in the image
library used by ipaste is probably a good example: the decompression is
not too CPU intensive, and the encoding can give good compression when
there's a lot of coherency in the image, thus reducing the size of the
file and the amount of I/O. On systems which can load pixels with DMA,
or on multiprocessor systems, you could arrange to do the I/O,
decompress, and write to the frame buffer with significant overlap,
with suitable use of multiple threads. On systems with only one
processor and no pixel DMA, you should still be able to overlap the
frame buffer write with the I/O read using two threads (never did get
around to trying it ...).

    david blythe
    blythe@sgi.com