shepard@finch.pa.dec.com (Mark Shepard) (09/03/90)
I'm curious about the tradeoffs involved in implementing DMA. Current DMA controller chips, such as Intel's 8237 or even the MC68450, seem ill-suited for use in 32-bit microprocessor systems. Not only do the chips interface poorly to the more complex buses (such as the 68030's, with dynamic bus sizing), but the chips may actually be slower than the CPU+cache! Are designers moving away from standard DMA chips, and instead designing custom DMA logic into ASICs for each system? Can anyone describe the DMA architecture of PC systems such as the Mac (or Mac II) and Amiga, and of high-end workstations such as Apollo and HP?

In the specific case of a SCSI controller (a 5380 or DP8490) in a 68030 or '040 system (at 25MHz, say), how seriously will NOT using a DMA controller (i.e., a device which can become a bus master and transfer data quickly) affect system performance? It seems the most serious hit would be the context switch (saving/restoring 68030 registers) to service the interrupts required to process each SCSI command.

Even if a DMA controller is used to transfer SCSI data to/from memory, performing a single SCSI transaction (i.e., reading a specific chunk of data into memory) requires several SCSI commands, each of which needs to be serviced by the main CPU. Can someone with SCSI experience comment on how significant this overhead is? I believe NCR is addressing this last point with "smart" SCSI controllers which are able to handle entire transactions without CPU intervention. Can anyone who has used these chips give me a better description?

Finally (just for the record), a Motorola application note suggests using a dedicated 68020 (plus some buffers and control logic) as a DMA controller. Does anyone know of a system using this approach? It seems like it would be simpler to just use the extra CPU as an I/O processor.

Thanks for any comments,
Mark Shepard	shepard@{decpa.pa,gatekeeper}.dec.com
daveh@cbmvax.commodore.com (Dave Haynie) (09/07/90)
In article <1990Sep3.081020.26060@wrl.dec.com> shepard@finch.pa.dec.com (Mark Shepard) writes:

>I'm curious about the tradeoffs involved in implementing DMA.
>Are designers moving away from standard DMA chips, and instead designing
>custom DMA logic into ASICs for each system? Can anyone describe the
>DMA architecture of PC systems such as the Mac (or Mac II) and Amiga, and
>high-end workstations such as Apollo and HP?

In the Amiga 3000, which is the only Amiga with full 32-bit DMA, it made a considerable amount of sense for us to do our own 32-bit-wide DMA controller, even if there were off-the-shelf alternatives. The DMA functions are actually split between two custom chips: the DMAC and the DRAM controller (RAMSEY). The DMAC communicates with the SCSI chip and provides a FIFO, the 32-bit data path, and the primary DMA control functions. RAMSEY is a custom DRAM controller which normally manages 16 Megabytes of memory. Since it needed most of the system address lines anyway to manage this memory, it gets them all, and it provides the 32-bit address for the DMA controller during DMA transfers. This allowed us to save costs by keeping both chips to 84 pins each; yet, since the pair is a full-speed, 32-bit, non-multiplexed 68030 bus master, it's fast.

>Mark Shepard	shepard@{decpa.pa,gatekeeper}.dec.com

--
Dave Haynie Commodore-Amiga (Amiga 3000) "The Crew That Never Rests"
   {uunet|pyramid|rutgers}!cbmvax!daveh	PLINK: hazy	BIX: hazy
	Get that coffee outta my face, put a Margarita in its place!