[comp.lsi] DRAM senseamp waveform

mark@mips.UUCP (Mark G. Johnson) (12/09/86)

In article <8612071534.AA02524@vlsi.caltech.edu>, speck@VLSI.CALTECH.EDU
	(Don Speck) writes:

> I'm designing a dRAM in Mosis SCMOS, and I'm finding that the
> sticky part is generating a properly controlled current for
> the sense amp.
	[... graph deleted ...]
> I've tweaked together circuits (in spice) that produce the right current
> waveform, but the waveshape breaks down miserably if the supply voltage
> changes.  Commercial dRAM's work over fairly wide voltage ranges, surely
> this problem has been solved before...	Any suggestions?
> How do real dRAM designers make their sense ramp generators?
----------------------------------------------------------------------------

Your article indicates that you intend to precharge the bitlines of your
DRAM to the positive supply voltage, VDD, and then use cross-coupled NMOS
transistors for sensing.

This is emphatically NOT what "real dRAM designers" do in a CMOS dynamic RAM.

The standard arrangement in real CMOS DRAMs is to precharge the bitlines to
0.5*VDD and use two cross-coupled pairs: one NMOS and one PMOS.  Real DRAM
designers do this because it saves power, cuts substrate noise, speeds up
sensing, and does a host of other Good Things.  The IEEE Journal of
Solid-State Circuits publishes a fat issue every October that's chock full
of good papers on real CMOS DRAMs; you may want to check out the last couple
of October issues.

However, what makes good sense for a 1-Megabit CMOS DRAM sold for $3.00
might not make sense for a university project DRAM built in MOSIS CMOS.

If you want to stick with VDD-precharged-bitlines for simplicity, you'll
probably want to use the classic NMOS sense amp from the 4K DRAM days.
The idea here is to insert a linear resistor between the high-capacitance
bitline and the low-capacitance senseamp nodes.  This lets you yank down
pretty hard on the senseamp tail node, with the resistors "isolating"
the bitlines while the senseamp is deciding.

The method of implementing the linear resistors varies; the classic stunt
is to use an NMOS device with a bootstrap level (1.6*VDD) on its gate.
IBM and Intel have successfully used depletion-mode NMOS devices as well.
Opinions vary on what the exact value of resistance should be; I recommend
that you select R such that the RC time constant (37% point) of the bitline
is about 15-20 nanoseconds.
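To make the sizing concrete, here's a back-of-envelope calculation in Python.
The bitline capacitance (1 pF) is an assumed, illustrative value, not from
the article -- your real number comes out of layout extraction.

```python
# Size the isolation resistor so the bitline RC time constant
# (the 37% point) lands in the recommended 15-20 ns window.
C_BITLINE = 1.0e-12          # ASSUMED bitline capacitance: 1 pF

for tau in (15e-9, 20e-9):   # target RC window from the text
    r = tau / C_BITLINE      # tau = R*C  =>  R = tau/C
    print(f"tau = {tau*1e9:.0f} ns  ->  R = {r/1e3:.0f} kOhm")
```

So for a 1 pF bitline you'd be looking at roughly 15-20 kOhm of isolation
resistance; scale R inversely if your extracted bitline capacitance differs.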

Note that the resistor-isolated sense amp doesn't need a critically-shaped
tailnode waveform.  The classic circuit used with this sense amp includes
two tailnode pulldowns, the "tickle" and the "thump".  Tickle is a small
pulldown sized so that when you turn it on, it pulls the tailnode down slowly
and poops out at VDD-VTN when all of those sense amp transistors turn on
(connecting a huge source of Q to the dinky Tickle transistor).  The sense
amps are now amplifying the small delta-V on the low capacitance, isolated
sense nodes.  A few nanoseconds later you activate the Thump transistor
which is a monster.  This yanks down the tailnode as quick as it can go,
but the resistor isolation prevents the high bitline from drooping much.
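The Tickle/Thump sequence can be sketched numerically.  Every component
value below is an illustrative assumption, not from the article -- a real
design would get these from layout extraction and SPICE.

```python
# Crude Euler-integration sketch of the two-phase tail-node pulldown.
VDD, VTN = 5.0, 1.0   # ASSUMED supply and NMOS threshold voltages
C_TAIL   = 0.2e-12    # ASSUMED tail-node cap before the senseamps conduct
C_LOADED = 2.2e-12    # tail node plus the "huge source of Q" once they do
I_TICKLE = 20e-6      # small pulldown: ~0.1 V/ns on the bare tail node
I_THUMP  = 2e-3       # the monster
DT = 0.1e-9           # 0.1 ns timestep

v_tail, t = VDD, 0.0
# Phase 1: Tickle pulls the tail node down slowly until it poops out
# at VDD - VTN, where the senseamp transistors all begin conducting.
while v_tail > VDD - VTN:
    v_tail -= I_TICKLE * DT / C_TAIL
    t += DT
t_tickle = t

# Phase 2: Thump yanks the now heavily loaded tail node to ground;
# the isolation resistors keep the high bitline from drooping much.
while v_tail > 0.0:
    v_tail -= I_THUMP * DT / C_LOADED
    t += DT
print(f"tickle phase: {t_tickle*1e9:.1f} ns, total: {t*1e9:.1f} ns")
```

With these numbers the tickle phase takes about 10 ns and the thump only a
few ns more, despite the 10x larger capacitance, because Thump carries 100x
the current.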

Also, you get to decide "when do I dare to activate the column-select circuits
which connect the bitlines to the output buffer?"  Most real DRAMs are pretty
aggressive and do this when the bitline delta-V is about 1 VT, i.e. about
5 nanoseconds after Thump.
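That rule of thumb is easy to sanity-check.  Suppose (an assumption for
illustration) that the low sense node snaps to ground essentially instantly
after Thump, so the low bitline follows it through its isolation resistor
with the 15-20 ns RC time constant suggested earlier:

```python
import math

VDD, VT = 5.0, 1.0            # ASSUMED supply and threshold voltages

for tau in (15e-9, 20e-9):    # bitline RC from the earlier recommendation
    # bitline delta-V grows as VDD * (1 - exp(-t/tau));
    # solve for the time at which it reaches one VT
    t = -tau * math.log(1.0 - VT / VDD)
    print(f"tau = {tau*1e9:.0f} ns -> 1 VT after {t*1e9:.1f} ns")
```

This comes out to roughly 3-5 ns for the assumed values, which agrees with
the "about 5 nanoseconds after Thump" timing above.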


-- 
-Mark Johnson   	DISCLAIMER: The opinions above are personal.
UUCP: 	{decvax,ucbvax,ihnp4}!decwrl!mips!mark   TEL: 408-720-1700 x208
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086