[comp.sys.amiga.tech] DMA in VM

ckp@grebyn.com (Checkpoint Technologies) (11/29/89)

	In the great DMA vs non-DMA debate, there is one question which
still nags at me. When AmigaDOS sprouts virtual memory, and there is a
possibility that a disk buffer may be scattered in several discontiguous
physical memory pages, which of the available disk controllers will be
able to handle this kind of buffering effectively? The answer, I fear,
is "the non-DMA ones".

	Do any of the DMA controllers have the ability to support
a discontiguous IO buffer? I know the Commodore A2090 can't. I don't
know about the A2091 or the A590. The HardFrame may; I know it uses
the Motorola 68430 DMA chip, and I also know that its big brother
the 68450 4-channel DMA chip supports chained DMA operations, but I
don't know if Motorola endowed the 68430 with this feature.

	If the DMA controller hardware can't handle discontiguous IO
buffers, then they must DMA into a contiguous buffer and CPU-copy the
results to the task buffer. This would be a big win for non-DMA
controllers, which would not need this intermediate RAM buffer.

	Oh, and it would be a dirty shame if programs weren't able to
allocate virtual memory for their disk buffers, IMHO. It would make
virtual memory less useful.

jwright@atanasoff.cs.iastate.edu (Jim Wright) (11/30/89)

ckp@grebyn.UUCP (Checkpoint Technologies) writes:
| 	Oh, and it would be a dirty shame if programs weren't able to
| allocate virtual memory for their disk buffers, IMHO. It would make
| virtual memory less useful.

This is as good as the one posted a while ago...

	"I so glad I bought the virtual memory addon for my
	Mac.  Now I'm gonna make a *really* big RAM disk with it."

-- 
Jim Wright
jwright@atanasoff.cs.iastate.edu

ckp@grebyn.com (Checkpoint Technologies) (12/01/89)

In article <2054@atanasoff.cs.iastate.edu> jwright@atanasoff.cs.iastate.edu (Jim Wright) writes:
>ckp@grebyn.UUCP (Checkpoint Technologies) writes:
>| 	Oh, and it would be a dirty shame if programs weren't able to
>| allocate virtual memory for their disk buffers, IMHO. It would make
>| virtual memory less useful.
>
>This is as good as the one posted a while ago...
>
>	"I so glad I bought the virtual memory addon for my
>	Mac.  Now I'm gonna make a *really* big RAM disk with it."
>
Laugh if you like. In VAX/VMS, every part of a process is virtual
memory, including any IO buffering. The IO system takes care of ensuring
that such pages are resident and locked for the duration of a DMA IO
transfer.

Consider this: An application reads a project into memory, works on it
for a while, then writes it back out. The reading and writing part would
be best served by large IO buffers, to take best advantage of the fast
file system. If allocated in virtual memory, these buffers can be paged
out when not in use, to allow best use of physical memory for the
project at hand. And further, when resident, they need not be physically
contiguous, which makes most efficient use of physical memory.

daveh@cbmvax.UUCP (Dave Haynie) (12/01/89)

in article <14059@grebyn.com>, ckp@grebyn.com (Checkpoint Technologies) says:

> 	In the great DMA vs non-DMA debate, there is one question which
> still nags at me. When AmigaDOS sprouts virtual memory, and there is a
> possibility that a disk buffer may be scattered in several discontiguous
> physical memory pages, which of the available disk controllers will be
> able to handle this kind of buffering effectively? The answer, I fear,
> is "the non-DMA ones".

There's an awful good chance both kinds of controllers would need revised
device drivers.  The DMA controllers certainly will.

> 	Do any of the DMA controllers have the ability to support
> a discontiguous IO buffer? I know the Commodore A2090 can't. 

Do you know of any way to ask the A2090, or any other controller,
to DMA into a discontiguous I/O buffer?  Of course not!  This only
becomes an issue when virtual memory makes this possible, and it's
handled by the device driver -- nothing above that cares about the
difference.

> 	If the DMA controller hardware can't handle discontiguous IO
> buffers, then they must DMA into a contiguous buffer and CPU-copy the
> results to the task buffer. 

Certainly not!  What the DMA _driver_ has to do is call the 
Virtual2Physical() system call, or whatever they use for this kind of
support, and get back the physical page(s) that correspond to that
one virtual block.  Then the _driver_ sets up the controller to DMA
directly into each physical block.  Obviously if there are several
small blocks, this will take a little longer than one big block, but
it's no tragedy; it'll still be fast.
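The translation step Dave describes can be sketched as follows. This is an illustrative sketch only: `Virtual2Physical()` is the post's own hypothetical name for the call, and the page size and page-table representation here are assumptions, not any real AmigaDOS API.

```python
PAGE_SIZE = 4096  # assumed MMU page size

def virtual_to_segments(vaddr, length, page_table):
    """Split a virtual buffer into physically contiguous DMA segments.

    page_table maps virtual page number -> physical page number
    (standing in for a Virtual2Physical()-style lookup).
    Returns a list of (physical_address, byte_count) pairs that the
    driver would program into the controller one at a time, merging
    runs of physically adjacent pages into a single larger segment.
    """
    segments = []
    offset = vaddr
    remaining = length
    while remaining > 0:
        vpage = offset // PAGE_SIZE
        in_page = PAGE_SIZE - (offset % PAGE_SIZE)
        chunk = min(remaining, in_page)
        paddr = page_table[vpage] * PAGE_SIZE + (offset % PAGE_SIZE)
        if segments and segments[-1][0] + segments[-1][1] == paddr:
            # physically contiguous with the previous segment: merge
            prev_addr, prev_len = segments[-1]
            segments[-1] = (prev_addr, prev_len + chunk)
        else:
            segments.append((paddr, chunk))
        offset += chunk
        remaining -= chunk
    return segments
```

A buffer spanning two virtual pages that happen to be physically adjacent still comes out as one segment, so the "several small blocks" penalty only applies when the pages really are scattered.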

-- 
Dave Haynie Commodore-Amiga (Systems Engineering) "The Crew That Never Rests"
   {uunet|pyramid|rutgers}!cbmvax!daveh      PLINK: hazy     BIX: hazy
                    Too much of everything is just enough

vinsci@ra.abo.fi (Leonard Norrgard) (12/01/89)

>>	"I so glad I bought the virtual memory addon for my
>>	Mac.  Now I'm gonna make a *really* big RAM disk with it."
>>
>Laugh if you like. In VAX/VMS, every part of a process is virtual
>memory, including any IO buffering. The IO system takes care of ensuring
>that such pages are resident and locked for the duration of a DMA IO
>transfer.

  Not necessarily. But unless you have done systems programming under
VAX/VMS, it is unlikely that you have ever seen anything other than
virtual memory.

-- Leonard	"Par CU is"


--
Leonard Norrgard, vinsci@ra.abo.fi, vinsci@finabo.bitnet, +358-21-6375762, EET.

jms@tardis.Tymnet.COM (Joe Smith) (12/01/89)

In article <14059@grebyn.com> ckp@grebyn.UUCP (Checkpoint Technologies) writes:
>	If the DMA controller hardware can't handle discontiguous IO
>buffers, then they must DMA into a contiguous buffer and CPU-copy the
>results to the task buffer. This would be a big win for non-DMA
>controllers, which would not need this intermediate RAM buffer.

There's an alternative you've overlooked.  Add another MMU just for DMA.  A
single MMU could be shared among several DMA devices and let them all think
they were doing I/O to contiguous addresses.  There's no need to bother the
CPU or need contiguous RAM if you can fake it.  (The VAX and other systems
do just this.)
-- 
Joe Smith (408)922-6220 | SMTP: JMS@F74.TYMNET.COM or jms@gemini.tymnet.com
BT Tymnet Net Tech Serv | UUCP: ...!{ames,pyramid}!oliveb!tymix!tardis!jms
PO Box 49019, MS-D21    | PDP-10 support: My car's license plate is "POPJ P,"
San Jose, CA 95161-9019 | humorous disclaimer: "My Amiga speaks for me."

jms@tardis.Tymnet.COM (Joe Smith) (12/01/89)

In article <14059@grebyn.com> ckp@grebyn.UUCP (Checkpoint Technologies) writes:
>	Oh, and it would be a dirty shame if programs weren't able to
>allocate virtual memory for their disk buffers, IMHO. It would make
>virtual memory less useful.

Although it makes things easier for lazy programmers, it would cause more
system overhead.

On a mediocre implementation of VM, the OS simply provides virtual memory
so that programs think they are running in a lot of real memory and cannot
tell that they are paging to the swapping disk.

On a good implementation of VM, the OS provides support so that programs can
be fully aware of paging to and from the disk.  In this case, programs do
not allocate virtual memory for disk buffers.  Instead, they allocate virtual
pages to be mapped to disk pages.

Instead of telling the OS to read some blocks from the disk into a buffer
that starts on an arbitrary byte boundary (and may span several pages),
the better programs tell the OS to associate a page of disk blocks with a
chunk of virtual memory that starts (and ends) on a page boundary.
If some other task has that disk page already mapped into physical memory,
the OS does not have to re-read those blocks.  It simply changes your task's
page map to point to what has already been read in.

A program that does not co-operate with VM would:
  1) allocate a large disk buffer
  2) open the file
  3) read the entire file into the buffer
  4) search through the buffer, modifying some bytes as needed
  5) write the buffer out to the file
  6) close the file.

But if you have allocated a disk buffer that is bigger than available
physical memory, the system will have to:
  A) write some other portion of virtual memory out to the swapping disk
     to free up a chunk of physical memory
  B) read the next chunk of data in from the file to physical memory
  C) write out this chunk to the swapping disk
  D) repeat steps B and C until the entire buffer has been filled
  E) swap in the code portion of the program that went out in step A
  F) swap in bits and pieces of the giant buffer as the program accesses it.
  G) reverse steps B and C when the buffer is written back out

On a VM system like TOPS-20 or TYMCOM-X, the steps above would be:
  1) allocate a range of virtual pages to be mapped to the file
  2) open the file and have it mapped read+write into virtual memory
  3) search through virtual memory, modifying some bytes as needed
  4) close the file.

and the OS would:
  A) trap the page fault when the program attempts to access a page not
     currently in memory
  B) write modified pages back to the disk file (since we asked for this)
  C) read the appropriate page in from the disk file.  (no I/O to swap disk)
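The TOPS-20/TYMCOM-X style of mapped-file access survives today as `mmap`. As a rough modern analog (not the actual TOPS-20 call, and the file contents here are made up), the four program steps above collapse to:

```python
import mmap
import os
import tempfile

# Create a small file to stand in for the "project" file.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")
os.close(fd)

# Map the file read+write; the OS pages directly to and from the
# file itself, so there is no explicit read/write call and no
# private disk buffer to swap.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        m[0:5] = b"HELLO"      # modify bytes in place

# Closing the mapping flushes the modified page back to the file.
with open(path, "rb") as f:
    data = f.read()
os.unlink(path)
```

The page fault, write-back, and read-in steps A-C all happen inside the kernel; the program never sees them.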

Summary: A good implementation of VM makes the concept of "disk buffers"
obsolete.  Existing programs still work, but don't get the same performance
as programs that have been rewritten to take advantage of the VM system.

-- 
Joe Smith (408)922-6220 | SMTP: JMS@F74.TYMNET.COM or jms@gemini.tymnet.com
BT Tymnet Net Tech Serv | UUCP: ...!{ames,pyramid}!oliveb!tymix!tardis!jms
PO Box 49019, MS-D21    | PDP-10 support: My car's license plate is "POPJ P,"
San Jose, CA 95161-9019 | humorous disclaimer: "My Amiga speaks for me."

peter@sugar.hackercorp.com (Peter da Silva) (12/01/89)

In article <14060@grebyn.com> ckp@grebyn.UUCP (Checkpoint Technologies) writes:
> Laugh if you like. In VAX/VMS, every part of a process is virtual
> memory, including any IO buffering. The IO system takes care of ensuring
> that such pages are resident and locked for the duration of a DMA IO
> transfer.

Too bad it doesn't go one step more and make the files *really* mapped, so
they "page" back into the real file they're supposed to come from.

Really, I don't see the point of writing stuff to one part of the disk when
you're just going to have to read it in again and write it to another part
of the disk later on.

This was a feature of Multics some, oh, 20 years ago now. ALL files were mapped
into memory... paged to and from the physical disk on demand.
-- 
Peter "Have you hugged your wolf today" da Silva <peter@sugar.hackercorp.com>
`-_-'
 'U` "Really, a video game is nothing more than a Skinner box."
       -- Peter Merel <pete@basser.oz>

doug@xdos.UUCP (Doug Merritt) (12/02/89)

In article <4643@sugar.hackercorp.com> peter@sugar.hackercorp.com (Peter da Silva) writes:
>This was a feature of Multics some, oh, 20 years ago now. ALL files were mapped
>into memory... paged to and from the physical disk on demand.

As you probably know, other systems since then have also used it. And I've
heard that Unix 5.4 will support this feature, too. (I guess this is
the "it's in there!" release of Unix :-)
	Doug
-- 
Doug Merritt		{pyramid,apple}!xdos!doug
Member, Crusaders for a Better Tomorrow		Professional Wildeyed Visionary

ckp@grebyn.com (Checkpoint Technologies) (12/02/89)

In article <8778@cbmvax.UUCP> daveh@cbmvax.UUCP (Dave Haynie) writes:
>
>There's an awful good chance both kinds of controllers would need revised
>device drivers.  The DMA controllers certainly will.
>
	Surely all the current hard disk controllers can
be made to support DMA with the right drivers. What I mean is that DMA
won't be as valuable.

>What the DMA _driver_ has to do is call the 
>Virtual2Physical() system call, or whatever they use for this kind of
>support, and get back the physical page(s) that correspond to that
>one virtual block.  Then the _driver_ sets up the controller to DMA
>directly into each physical block.  Obviously if there are several
>small blocks, this will take a little longer than one big block, but
>it's no tragedy; it'll still be fast.

	One of the better ways to slow down a transfer is to break a
large transfer into a bunch of smaller ones; then you have the host CPU
racing to keep up with the spin of the disk drive. Likely you'll have to
resort to interleaving greater than 1:1.

	As long as you can be sure that a single physical disk sector
will always be within a physical memory page, then one good translated
DMA will do fine. More likely, however, is that some of the disk sectors
would span the gap between physical memory pages. Then, without hardware
to handle a discontiguous DMA transfer, you'll have to DMA into a
different contiguous memory area and copy the results.
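Whether a sector straddles a page boundary is simple arithmetic. A sketch (the 4K page size is an assumption): with 512-byte sectors and a page-aligned buffer, no sector ever crosses a boundary, since 512 divides the page size evenly; the problem only appears when the buffer starts at an offset that is not a multiple of the sector size.

```python
PAGE_SIZE = 4096   # assumed MMU page size
SECTOR = 512       # typical disk sector

def crosses_page(offset, size=SECTOR, page=PAGE_SIZE):
    """True if a transfer of `size` bytes starting at byte `offset`
    spans a page boundary (and so would need two DMA segments)."""
    return offset // page != (offset + size - 1) // page
```

So the filesystem can sidestep the whole issue by handing drivers page-aligned, sector-multiple buffers.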

ckp@grebyn.com (Checkpoint Technologies) (12/02/89)

In article <VINSCI.89Dec1030202@ra.abo.fi> vinsci@ra.abo.fi (Leonard Norrgard) writes:
>>Laugh if you like. In VAX/VMS, every part of a process is virtual
>>memory, including any IO buffering. The IO system takes care of ensuring
>>that such pages are resident and locked for the duration of a DMA IO
>>transfer.
>
>  Not necessarily. But unless you have done systems programming under
>VAX/VMS, it is unlikely that you ever have seen anything else than
>virtual memory.

	Well, as it happens, I have written a VMS DMA device driver... I
think that qualifies as 'systems programming'...

ckp@grebyn.com (Checkpoint Technologies) (12/02/89)

In article <4643@sugar.hackercorp.com> peter@sugar.hackercorp.com (Peter da Silva) writes:
>
>Really, I don't see the point of writing stuff to one part of the disk when
>you're just going to have to read it in again and write it to another part
>of the disk later on.
>
	Well put. Virtual memory actually is at its best when it's
*not* *being* *used*, or in other words, there's just no good substitute
for real memory.

ckp@grebyn.com (Checkpoint Technologies) (12/02/89)

In article <14064@grebyn.com> ckp@grebyn.UUCP (Checkpoint Technologies) writes:
>	Surely all the current hard disk controllers can
>be made to support DMA with the right drivers. What I mean is that DMA
>won't be as valuable.

	Oops, slip of the fingers - I certainly don't mean that all
current disk controllers can be made to support DMA! What I meant was
"...all the current hard disk controllers can be made to support VM.."

waggoner@dtg.nsc.com (Mark Waggoner) (12/02/89)

In article <4643@sugar.hackercorp.com> peter@sugar.hackercorp.com (Peter da Silva) writes:
>In article <14060@grebyn.com> ckp@grebyn.UUCP (Checkpoint Technologies) writes:
>> Laugh if you like. In VAX/VMS, every part of a process is virtual
>> memory, including any IO buffering. The IO system takes care of ensuring
>> that such pages are resident and locked for the duration of a DMA IO
>> transfer.
>
>Too bad it doesn't go one step more and make the files *really* mapped, so
>they "page" back into the real file they're supposed to come from.
>
>Really, I don't see the point of writing stuff to one part of the disk when
>you're just going to have to read it in again and write it to another part
>of the disk later on.
>
>This was a feature of Multics some, oh, 20 years ago now. ALL files were mapped
>into memory... paged to and from the physical disk on demand.

If you had a fast disk that you were using for your virtual memory 
swapping and a somewhat slower disk you were using for your
main storage it would make sense to move it from the storage disk to
the virtual memory disk.  You might, for example, have a large optical
WORM type drive with lots of data on it and a smaller, but faster hard
drive that you used just for VM.  You also wouldn't want to map the
buffers for a floppy drive back on to the floppy that they came from.

-- 
 ,------------------------------------------------------------------.
|  Mark Waggoner   (408) 721-6306           waggoner@dtg.nsc.com     |
 `------------------------------------------------------------------'

ckp@grebyn.com (Checkpoint Technologies) (12/02/89)

In article <838@tardis.Tymnet.COM> jms@tardis.Tymnet.COM (Joe Smith) writes:
>There's an alternative you've overlooked.  Add another MMU just for DMA.  A
>single MMU could be shared among several DMA devices and let them all think
>they were doing I/O to contiguous addresses.  There's no need to bother the
>CPU or need contiguous RAM if you can fake it.  (The VAX and other systems
>do just this.)

	In fact, I consider this just another method to allow DMA to
discontiguous physical memory, so I didn't mention it separately. It
happens that the 68851 PMMU, as used in the A2620 card, has just that
feature: a separate page table for a DMA device. It may be remotely
possible to press this into service in the Amiga, but I doubt it.
However, the 68030's built-in PMMU doesn't have this feature, so we may
as well forget it since we all aspire to 68030's and 68040's anyway.

	Yep, VAX systems have an 'MMU' of sorts (they call it the Unibus
Mapping Registers) for use by the IO system.  It has the dual role of
allowing DMA to discontiguous pages, and expanding the 256K address
space of the IO bus into the much-larger (up to) 256 Meg of CPU
address space.
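The translation those map registers perform is straightforward; here is a toy sketch (the two sample mappings are invented, and the real registers also carry valid bits and byte-offset machinery not shown):

```python
PAGE = 512            # Unibus map granularity is 512-byte pages

# register number -> physical page frame (hypothetical sample mapping)
map_registers = {0: 100, 1: 7}

def unibus_to_physical(ubaddr):
    """Translate a Unibus address through the map registers, the way
    the VAX lets a DMA transfer land in scattered physical pages while
    the device sees one contiguous Unibus address range."""
    reg = ubaddr // PAGE
    return map_registers[reg] * PAGE + ubaddr % PAGE
```

Two adjacent Unibus pages (registers 0 and 1) land in physical pages 100 and 7; the device never knows.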

schow@bcarh185.bnr.ca (Stanley T.H. Chow) (12/03/89)

In article <8778@cbmvax.UUCP> daveh@cbmvax.UUCP (Dave Haynie) writes:
>Do you know of any way to ask the A2090, or any other controller,
>to DMA into a discontiguous I/O buffer?  Of course not!   [...]   

I do, I do. I have a DMA controller on my Amiga-1000 that can do DMA
to discontiguous regions. [This is the controller from Side Effects
that never got to market.]

In fact, all the smarts are in the AMD disk controller chip; we just
added the DMA interface.

Stanley Chow        BitNet:  schow@BNR.CA
BNR		    UUCP:    ..!psuvax1!BNR.CA.bitnet!schow
(613) 763-2831		     ..!utgpu!bnr-vpa!bnr-rsc!schow%bcarh185
Me? Represent other people? Don't make them laugh so hard.

valentin@cbmvax.UUCP (Valentin Pepelea) (12/03/89)

In article <14068@grebyn.com> ckp@grebyn.UUCP (Checkpoint Technologies) writes:
>
>It happens that the 68851 PMMU, as used in the A2620 card, has just that
>feature: a separate page table for a DMA device. It may be remotely
>possible to press this into service in the Amiga, but I doubt it.

The 68851 MMU resides between the main processor and memory. If a DMA device
wanted to access the memory through the MMU, it would have to be sitting on the
same side as the processor. Obviously impossible.

>However, the 68030's built-in PMMU doesn't have this feature, ...

For obvious reasons.

Valentin
-- 
The Goddess of democracy? "The tyrants     Name:    Valentin Pepelea
may destroy a statue,  but they cannot     Phone:   (215) 431-9327
kill a god."                               UseNet:  cbmvax!valentin@uunet.uu.net
             - Ancient Chinese Proverb     Claimer: I not Commodore spokesman be

33014-18@sjsumcs.sjsu.edu (Eduardo Horvath) (12/04/89)

In article <839@tardis.Tymnet.COM> jms@tardis.Tymnet.COM (Joe Smith) writes:
>In article <14059@grebyn.com> ckp@grebyn.UUCP (Checkpoint Technologies) writes:
>>	Oh, and it would be a dirty shame if programs weren't able to
>>allocate virtual memory for their disk buffers, IMHO. It would make
>>virtual memory less useful.

	My turn to put in my $.02.  What do you need virtual disk buffers for?
I thought that the whole idea of DMA was to read data directly where it was
needed and do byte twiddling there, instead of reading into buffers and
copying it, like non-DMA.  Swapping disk buffers in and out of RAM is a
waste of bandwidth.  You'd need buffers to read and write the buffers from!

[...]

>On a good implementation of VM, the OS provides support so that program can
>be fully aware of paging to and from the disk.  In this case, programs do
>not allocate virtual memory for disk buffers.  Instead, they allocate virtual
>pages to be mapped to disk pages.

Nice idea, but wouldn't it be a little complicated?  Wouldn't you need to
design your OS from scratch in such a way as to make files and virtual RAM
identical?

[...]

>Summary: A good implementation of VM makes the concept of "disk buffers"
>obsolete.

I heartily agree.

>Existing programs still work, but don't get the same performance
>as programs that have been rewritten to take advantage of the VM system.

I think the OS should be handling this mess.  The language you are writing
in should not need to deal with buffers at all, or if the language needs
buffers, like the stdio library, they should be allocated from mappable
memory.  But I don't think that the stdio library should be part of the
language.  The OS has the best knowledge about how to optimize disk IO,
so it should handle the buffering.


Try:  33014-18@sjsumcs.SJSU.EDU	|	Disclaimer: 
Eduardo Horvath			|	I have no idea what I'm talking about

doug@xdos.UUCP (Doug Merritt) (12/06/89)

In article <1989Dec4.154624.22658@sjsumcs.sjsu.edu> 33014-18@sjsumcs.SJSU.EDU (Eduardo Horvath) writes:
>  But I don't think that the stdio library should be part of the
>language.  The OS has the best knowledge about how to optimize disk IO,
>so it should handle the buffering.

You mean the stdio library *implementation* shouldn't be part of the
language. It has been quite advantageous to C that the stdio library
*interface definition* has been part of the language. And it can hide
memory mapped files pretty easily; I've implemented such myself in the
last year.

(C used to use the "portable i/o library", which was much less clean
in many ways. Some language lawyers claim that these libraries are
not part of C itself, but to my mind this is useless philosophizing.
For all practical purposes, when the stdio library replaced the portable
i/o library, the C language itself was changed fairly radically.)
	Doug
-- 
Doug Merritt		{pyramid,apple}!xdos!doug
Member, Crusaders for a Better Tomorrow		Professional Wildeyed Visionary

daveh@cbmvax.UUCP (Dave Haynie) (12/06/89)

in article <14064@grebyn.com>, ckp@grebyn.com (Checkpoint Technologies) says:

> In article <8778@cbmvax.UUCP> daveh@cbmvax.UUCP (Dave Haynie) writes:

>>There's an awful good chance both kinds of controllers would need revised
>>device drivers.  The DMA controllers certainly will.

> 	Surely all the current hard disk controllers can
> be made to support DMA with the right drivers. What I mean is that DMA
> won't be as valuable.

Huh?  DMA is defined by the controller's hardware.  If that hardware
doesn't support DMA, no software in the world is going to make it do
so.  Period.

There are certain situations where DMA may not be as valuable.  With the
current system and certain accelerator boards, 32 bit memory isn't 
accessable by the expansion bus DMA (the CSA boards are an example --
they locate their 32 bit memory outside of the controllers' address
space).  In such a case, a non-DMA controller may very well be more
efficient.  Which is why Commodore-Amiga accelerator boards support
DMA into their 32 bit memory.

> 	As long as you can be sure that a single physical disk sector
> will always be within a physical memory page, then one good translated
> DMA will do fine. More likely, however, is that some of the disk sectors
> would span the gap between physical memory pages. Then, without hardware
> to handle a discontiguous DMA transfer, you'll have to DMA into a
> different contiguous memory area and copy the results.

With virtual memory, you certainly can't guarantee contiguous buffer 
memory.  And with current alignment restrictions, you can't guarantee 
that blocks line up with physical pages in all cases.  Though in reality, 
you probably can, since it's the filesystem that makes most of the 
allocations for you, and it can certainly know about virtual memory 
and page alignments without any applications having to know.  If you're
loading programs across page boundaries, you're always going to be
slowing things down, since that'll increase paging activity considerably.
A slight slowing of DMA at that point will be the least of your worries
(MMU pages are likely to be 4k or 8k in size, vs. the 512 bytes of a 
disk block, and it's unlikely that memory will be so fragmented that
you'd often get disk pages unevenly crossing MMU page groups, even with
the current allocation scheme).

-- 
Dave Haynie Commodore-Amiga (Systems Engineering) "The Crew That Never Rests"
   {uunet|pyramid|rutgers}!cbmvax!daveh      PLINK: hazy     BIX: hazy
                    Too much of everything is just enough

w-edwinh@microsoft.UUCP (Edwin Hoogerbeets) (12/09/89)

In article <8800@cbmvax.UUCP> valentin@cbmvax.UUCP (Valentin Pepelea) writes:
>The 68851 MMU resides between the main processor and memory. If a DMA device
>wanted to access the memory through the MMU, it would have to be sitting on the
>same side as the processor. Obviously impossible.

Which side of the MMU are the custom chips? 

If they are on the same side as the processor, does this mean it will
obviate the distinction between chip and fast RAM, since the Meg or so
that these chips could access could be remapped to be anything? That
would be neat.

Edwin

jms@doctor.Tymnet.COM (Joe Smith) (12/09/89)

>In article <839@tardis.Tymnet.COM> jms@tardis.Tymnet.COM (Joe Smith) writes:
>>On a good implementation of VM, the OS provides support so that program can
>>be fully aware of paging to and from the disk.  In this case, programs do
>>not allocate virtual memory for disk buffers.  Instead, they allocate virtual
>>pages to be mapped to disk pages.

In article <1989Dec4.154624.22658@sjsumcs.sjsu.edu> 33014-18@sjsumcs.SJSU.EDU (Eduardo Horvath) writes:
>Nice idea, but wouldn't it be a little complicated?  Wouldn't you need to
>design your OS from scratch in such a way as to make files and virtual RAM
>identical?

It can be hacked into an existing OS.  When Tymshare went from TOPS-10 to
TYMCOM-X, they defined a new format for the disk file structure that used
pages instead of blocks.  (A page was defined to be 4 blocks, 2048 bytes.)
All files start on a page boundary.  The file information block is a full
page; big files have pointers to indirect pages.  The bit map is done
by pages.  Directories start out as 7 pages long.  (To find a particular
file, convert the name to an integer, divide by 7, use the remainder to
select one of the 7 pages, do a linear search for the name in that page,
follow the linked-list pointer to overflow pages if the name is not found.  Sort of a nice
compromise between the strictly linear search for most OS's and the
strictly hashed search for AmigaDOS.)
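That lookup scheme can be sketched in a few lines. This is an illustration only: the name-to-integer encoding below is invented (the post doesn't say how TYMCOM-X did it), and the overflow chaining is shown with an in-memory pointer rather than an on-disk page number.

```python
NUM_PAGES = 7   # directories start out as 7 pages

def name_to_int(name):
    # Hypothetical name encoding; the real TYMCOM-X one isn't given.
    return sum(name.encode())

def lookup(directory, name):
    """directory: list of 7 'pages', each a dict of name -> file info,
    with an optional '__next__' entry chaining to an overflow page.
    Hash selects the page; a linear search runs within it."""
    page = directory[name_to_int(name) % NUM_PAGES]
    while page is not None:
        if name in page:
            return page[name]
        page = page.get("__next__")   # follow the linked-list pointer
    return None
```

The hash narrows the search to one page of the directory; only the overflow chain ever costs extra reads.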

To do I/O to blocks, a module called SIMIO was added to the kernel.  When
a program asked to read in a single block, SIMIO would trap the call,
read in the appropriate page, and blit the data into the user's disk buffer.
For writing one block, SIMIO would read in the appropriate page (only if
it was not already in physical memory), blit the data from the user's disk
buffer into the page, and mark the page as being "dirty".  The general
paging routines would eventually write the page back to disk.

This way all the old programs continued to work, and newer programs could
do mapping calls directly and avoid the context switches to and from SIMIO.
Of course the core routines that allocate physical memory and schedule
disk I/O had to be rewritten, but the majority of the OS routines were
unchanged.
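The SIMIO trick — satisfying old block-granularity calls out of a page cache — can be sketched like this. A toy model under stated assumptions: the cache is a plain dict, the "disk" is a bytes object, and the real SIMIO's trap mechanism and write-back scheduling are omitted.

```python
BLOCK = 512
PAGE = 2048           # a TYMCOM-X page is 4 blocks
page_cache = {}       # page number -> [page data, dirty flag]

def read_page(disk, pageno):
    # Fetch the page into the cache if it isn't already resident.
    if pageno not in page_cache:
        data = bytearray(disk[pageno * PAGE:(pageno + 1) * PAGE])
        page_cache[pageno] = [data, False]
    return page_cache[pageno]

def read_block(disk, blockno):
    """SIMIO-style read: pull in the containing page, copy out the block."""
    entry = read_page(disk, blockno * BLOCK // PAGE)
    off = (blockno * BLOCK) % PAGE
    return bytes(entry[0][off:off + BLOCK])

def write_block(disk, blockno, data):
    """Write one block by patching the cached page and marking it dirty;
    the general paging routines would write it back to disk later."""
    entry = read_page(disk, blockno * BLOCK // PAGE)
    off = (blockno * BLOCK) % PAGE
    entry[0][off:off + BLOCK] = data
    entry[1] = True
```

Old programs keep issuing block reads and writes; only whole pages ever move to or from the disk.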

In summary: It is possible to transparently add memory mapped paged disk
I/O to an Operating System.
-- 
Joe Smith (408)922-6220 | SMTP: JMS@F74.TYMNET.COM or jms@gemini.tymnet.com
BT Tymnet Tech Services | UUCP: ...!{ames,pyramid}!oliveb!tymix!tardis!jms
PO Box 49019, MS-D21    | PDP-10 support: My car's license plate is "POPJ P,"
San Jose, CA 95161-9019 | humorous disclaimer: "My Amiga speaks for me."

valentin@cbmvax.UUCP (Valentin Pepelea) (12/09/89)

In article <9383@microsoft.UUCP> w-edwinh@microsoft.UUCP (Edwin Hoogerbeets) writes:
>In article <8800@cbmvax.UUCP> valentin@cbmvax.UUCP (Valentin Pepelea) writes:
>>The 68851 MMU resides between the main processor and memory. If a DMA device
>>wanted to access the memory through the MMU, it would have to be sitting on the
>>same side as the processor. Obviously impossible.
>
>Which side of the MMU are the custom chips? 

There is no MMU in the Amiga 500 & 1000, so obviously on the A2620 and A2630
accelerators, the MMU filters only the CPU's memory accesses.

>If they are on the same side as the processor, does this mean it will
>obviate the distinction between chip and fast RAM, since the Meg or so
>that these chips could access could be remapped to be anything? That
>would be neat.

Actually, that would be disastrous. Even if a bright engineer could figure out
how to do that, the overhead introduced by the MMU would render things very
slow. More like frozen. If you want more CHIP ram, it is easier to make the
chips access more ram than to build or externally attach an MMU to them.

Valentin
-- 
The Goddess of democracy? "The tyrants     Name:    Valentin Pepelea
may destroy a statue,  but they cannot     Phone:   (215) 431-9327
kill a god."                               UseNet:  cbmvax!valentin@uunet.uu.net
             - Ancient Chinese Proverb     Claimer: I not Commodore spokesman be