Copyright 2002, Analog Devices, Inc. All rights reserved. Analog Devices assumes no responsibility for customer product design or the use or application of customers’ products or for any infringements of patents or rights of others which may result from Analog Devices assistance. All trademarks and logos are property of their respective holders. Information furnished by Analog Devices Applications and Development Tools Engineers is believed to be accurate and reliable, however no responsibility is assumed by Analog Devices regarding the technical accuracy of the content provided in all Analog Devices’ Engineer-to-Engineer Notes.
Contributed by Robert Hoffmann, European DSP Applications. Rev 1 (20-March-02)
The ABC of SDRAMemory
As new algorithms for digital signal processing are developed, the memory requirements for these
applications will continue to grow. Not only will these applications require more memory, but they will
also require increased efficiency for data storage and retrieval.
The ADSP-21065L, ADSP-21161N and ADSP-TS101S members of the floating-point DSP family from
Analog Devices Inc. have been designed with an on-chip SDRAM interface, allowing applications to
gluelessly incorporate less expensive SDRAM devices into designs. Some new members of the
fixed-point family, the ADSP-21532 and ADSP-21535, will also have an SDRAM interface.
Introduction
This application note demonstrates the complexity of SDRAM technology. It illustrates that
an SDRAM is not “just a memory”.
The first part shows the basic internal DRAM circuits with their timing specifications. In the second
part, the SDRAM architecture units and their features are discussed. Furthermore, the timing specs and
the set of commands are illustrated with the help of state diagrams. The last part deals with the different
access modes, burst performance, and the controller’s address mapping schemes. Moreover, some SDRAM
standards are introduced.
1 – DRAM Technology
Two common storage options are static random access memory (SRAM) and dynamic random access
memory (DRAM). The functional difference between an SRAM and DRAM is how the device stores its
data.
In an SRAM, data is stored in a cell of up to six transistors, which holds its value until it is overwritten
with new data. A DRAM stores data in capacitors, which gradually lose their charge and, without
refreshing, lose their data.
Synchronous DRAM technology, originally offered for main memory use in personal computers, is
not completely new, but is an advanced development of DRAM technology. The interface works
in a synchronous manner, which makes the hardware requirements easier to fulfill.
1.1 – The storage cell
As figure 1 points out, the binary information is stored in a unit consisting of a transistor and a very
small capacitor of about 20-40 fF (femtofarads, 0.020-0.040 pF) for each cell. A charged capacitor represents a
logical 1, a discharged capacitor a logical 0.
Additionally, the figure shows an example with a 1-bit I/O structure. Typical structures are 4, 8, 16 and
32 bits. The 1-bit architecture has disappeared because of the market demand for high-density memory.
Structures of 4, 8, 16 or 32 bits require less hardware-intensive solutions.
1.2 – The surrounding circuits
The capacitor storage cell needs surrounding circuits such as precharge circuits, sense amplifiers, I/O
gates, word and bit lines.
Simply powering up the device brings the memory into an undefined state. A command is then required
to bring the banks into the idle state. The memory is now in a defined state and can be accessed properly.
Row Activation
The row address decoder (figure 1) starts accessing the precharge circuit with the word line when the
~RAS line is asserted. Both inputs (positive and negative bit line) of the sense amp (op-amp) are
precharged to VDD/2. This sequence is current-intensive and requires some time. In the meantime, the
row’s sense amp starts gating. Both inputs of the sense amp are now at the same voltage, VDD/2. The word
line switch connects the storage cell to the positive bit line. Depending on the capacitor cell’s charge, the
potential on the positive bit line increases (VDD/2 +∆V) or decreases (VDD/2 -∆V).
Because the cell capacitance of 40 fF (0.040 pF) is much smaller than the transistor’s and the bit line’s
capacitance, the voltage difference is typically only about ∆V=100 mV. This small sensed voltage is
amplified to the level VDD or 0. The sense amp acts as a latch to store the sensed value. After the sensing
has finished (spec tRCD), the device is ready for read or write operations.
Note: The advantage of the precharge technique is that only the difference between the positive and
negative bit lines must be amplified, not an absolute voltage level, thus increasing reliability.
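To put a number on this, the following sketch estimates the sense-amp input step from the charge
sharing between the cell and the bit line. The bit-line capacitance of 500 fF and the 3.3 V supply are
assumed values for illustration only; they are not taken from a specific device.

#include <stdio.h>

/* Charge-sharing estimate for a DRAM read: the cell capacitor is
 * connected to a bit line precharged to VDD/2, and the resulting
 * voltage step is what the sense amplifier must resolve.
 * C_BL and VDD are assumed values for illustration. */
int main(void)
{
    const double vdd    = 3.3;      /* supply voltage [V] (assumed)    */
    const double c_cell = 40e-15;   /* cell capacitance, 40 fF         */
    const double c_bl   = 500e-15;  /* bit-line capacitance (assumed)  */

    /* Logic 1: cell charged to VDD, bit line precharged to VDD/2.
     * After charge sharing: dV = (VDD - VDD/2) * Ccell / (Ccell + Cbl) */
    double dv = (vdd - vdd / 2.0) * c_cell / (c_cell + c_bl);

    printf("sense-amp input step: %.0f mV\n", dv * 1e3); /* ~122 mV */
    return 0;
}

With these assumptions the step comes out at roughly 120 mV, consistent with the ∆V of about 100 mV
quoted above.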
Figure 1: Basic DRAM storage cell. The diagram shows the row address decoder (driven by ~RAS) selecting the word line switch to the storage cell, the column address decoder (driven by ~CAS), the VDD/2 precharge circuits and sense amplifiers on the positive and negative bit lines, and the I/O gates with the read/write DQ path controlled by ~OE and ~WE.
Column Write
After the spec tRCD is satisfied, the assertion of ~CAS causes the column decoder to select the
dedicated sense amp. In parallel, ~WE enables the write drivers to feed the data directly into the
sense amp.
Column Read
There is a basic difference between read and write operations. After the spec tRCD is satisfied, the
assertion of ~CAS causes the column decoder to select the dedicated sense amp. In parallel, ~OE
starts the read latch to drive the data from the sense amp to the output.
Note: The read operation (unlike write) discharges the capacitor. In order to restore the information in
the cell, additional logic performs a precharge sequence.
Row Precharge
If the next write or read access falls in another row, the current row (page) must be closed, or
“precharged”. The binary zeros and ones stored in the sense amps during row activation rewrite the
storage cells during precharge (spec tRP).
Note: After precharge, the row returns to the idle state.
Note: Do not confuse the precharge circuit with the precharge command.
Row Refresh
The refresh is simply a sequence based on an activation followed by a precharge (spec tRC=tRAS+tRP), but
with disabled I/O buffers. The sense amp reads the storage cell during the tRAS time, immediately
followed by a precharge tRP to rewrite the cell.
Note: Refresh must occur periodically for each row within the specified time tREF.
1.3 – Timing Issues
The capacitor cell is accessed in a multiplexed manner (figure 2):
The RAS line is asserted through the activate command; all bit lines are now biased to VDD/2. In
parallel, all of the row’s sense amps (depending on the page size) are gated. Finally, the values are stored,
requiring the time tRCD (RAS-to-CAS delay).
Now, any column can be opened by a read or write command (CAS line asserted). On a write to the
cell, the write command and the data are sampled in the same clock cycle. If the next access falls
in a different row, a precharge sequence is required within the following time frame tRP (precharge period).
Note: DRAM accesses are multiplexed, first row address followed by column address.
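As a rough numeric sketch of what this multiplexing costs on a row miss, assume a 100 MHz clock and
tRP = tRCD = 20 ns. These values are typical of PC100-class parts and are assumptions, not figures from
this note.

#include <math.h>
#include <stdio.h>

/* Cycle cost of an access that falls into a closed row: the open row
 * must first be precharged (tRP) and the new row activated (tRCD)
 * before the column access can start. Timing values are assumed. */
int main(void)
{
    const double tck  = 10.0;  /* clock period at 100 MHz [ns] (assumed) */
    const double trp  = 20.0;  /* precharge period [ns] (assumed)        */
    const double trcd = 20.0;  /* RAS-to-CAS delay [ns] (assumed)        */

    int cycles = (int)(ceil(trp / tck) + ceil(trcd / tck));
    printf("row-miss overhead: %d clock cycles\n", cycles); /* 4 */
    return 0;
}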
1.4 – Refresh Issues
The DRAM must refresh each row before the spec tREF has elapsed. The row refresh pattern is free
as long as the time tREF is satisfied for each row. Three different refresh modes are available:
Note: The refresh uses an internal read during tRAS and a write-back during tRP.
Note: The refresh row cycle is tRC=tRAS+tRP.
RAS only Refresh
Applying an external row address at the falling edge of the ~RAS pin starts a refresh of that row each
time one is required.
Note: The RAS only refresh requires an external address counter.
Hidden Refresh
This mode is similar to the CBR refresh. The external row address (falling edge of ~RAS) and column
address (falling edge of ~CAS) start an internal hidden refresh using the internal refresh counter,
each time one is required.
Note: The hidden refresh can only be used for continuous accesses of the DRAM.
CAS before RAS Refresh
The CBR or auto refresh is started by the assertion of ~CAS followed by the assertion of ~RAS, that
is, in reversed order. Hereby, the device requires no external address to fulfill a refresh; the
internal refresh counter handles this job. The time gap between refreshing two successive rows in a
classical DRAM is 15.625 µs; in spec terms, the per-row refresh interval adds up to tREF/rows. In this
particular mode, the data transactions are periodically interrupted by auto refresh commands.
Note: The CBR refresh is convenient and reduces the power dissipation.
Figure 3: The capacitor refresh. The cell voltage decays toward the minimum sense threshold and is restored once per row cycle tRC; every row (row0, row1, ...) is revisited within tREF.
1.5 – SRAM vs. DRAM
SRAMs are generally simple from a hardware and software perspective. Every read or write instruction
is a single access, and wait states can be programmed to access slower memories if desired. The
disadvantage of SRAMs is that large, fast memories for systems that require zero wait
states are expensive. DRAMs have the advantage of address multiplexing, thus needing fewer address
lines. Additionally, they are available in larger capacities than SRAMs because of the high-density cell.
The main disadvantage is the need for refresh and precharge operations.
2 – SDRAM Architecture
As the speed of processors continues to increase, the speed of standard DRAMs becomes inadequate. In
order to improve the overall system performance, the memory operations have to be synchronized with the
system clock. Toshiba's Tecra 700 was the first computer to use SDRAM for main memory, and
Kingston Technology has supported the Tecra since its initial release in November 1995. Figure 4
demonstrates the simplified pipelined architecture of an SDRAM.
When synchronous memories use a pipelined architecture (registers for input and output signals), they
produce additional performance gains. In a pipelined device, the internal memory array only needs to
present its data to an internal register to be latched, rather than pushing the data off the chip to the rest of
the system. Because the array sees only the internal delays, it presents data to the latch faster than it
would if it had to drive off-chip. Further, once the latch captures the array’s data, the array can start
preparing for the next memory cycle while the latch drives the rest of the system.
Figure 4: Simplified pipelined architecture of a 4M x 4bit x 4-bank SDRAM. The command decoder with mode register (MRS) evaluates CLK, CKE, ~CS, ~RAS, ~CAS, ~WE and A10; the CBR refresh logic and self-refresh timer drive the refresh addressing; the input address buffer feeds the row and column address latches and the burst counter (A11:0, A9:0, BA1:0); the row decoder selects among 4096 word lines and the column decoder among 1024 x 4-bit columns of each DRAM core bank (4096 x 1024 x 4 bits); the data control circuit and DQ buffer handle DQ3:0 under control of DQM.
2.1 – Command Units
Relevant units: command buffer, command decoder, mode register
Command Buffer
All input control signals are sampled on the positive edge of CLK, making the timing requirements
(setup and hold times) much easier to meet. The CKE pin is used to enable the CLK operation of the
SDRAM.
Note: The pulsed external SDRAM commands are executed with classic internal DRAM timing.
Command Decoder
This unit is the heart of the memory device: the inputs trigger a state machine, which is part of the
command logic. On the rising CLK edge, the command logic decodes the lines ~RAS, ~CAS, ~WE,
~CS and A10 and executes the command.
Note: The command decoder is enabled with ~CS low.
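The decoding itself follows the well-known SDRAM command truth table. The sketch below uses the
common JEDEC encoding of ~CS, ~RAS, ~CAS and ~WE; A10 is left out because it only qualifies
commands (auto precharge on reads and writes, all-bank precharge).

/* Sketch of the SDRAM command truth table decoded by the command
 * logic on each rising CLK edge (common JEDEC encoding). The device
 * ignores everything while ~CS is high (command decoder disabled). */
typedef enum {
    CMD_INHIBIT, CMD_NOP, CMD_ACTIVE, CMD_READ, CMD_WRITE,
    CMD_BURST_TERMINATE, CMD_PRECHARGE, CMD_AUTO_REFRESH, CMD_LOAD_MODE
} sdram_cmd_t;

/* cs, ras, cas, we are the active-low pin levels (0 = asserted). */
sdram_cmd_t decode(int cs, int ras, int cas, int we)
{
    if (cs) return CMD_INHIBIT;                    /* ~CS high: deselect */
    if ( ras &&  cas &&  we) return CMD_NOP;
    if (!ras &&  cas &&  we) return CMD_ACTIVE;    /* open a row         */
    if ( ras && !cas &&  we) return CMD_READ;      /* A10: auto prech.   */
    if ( ras && !cas && !we) return CMD_WRITE;
    if ( ras &&  cas && !we) return CMD_BURST_TERMINATE;
    if (!ras &&  cas && !we) return CMD_PRECHARGE; /* A10: all banks     */
    if (!ras && !cas &&  we) return CMD_AUTO_REFRESH;
    return CMD_LOAD_MODE;                          /* all lines low      */
}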
Mode Register
The mode register stores the data for controlling the various operation modes of the SDRAM. The current
mode is determined by the values on the address lines.
EE-126 Page 9
Technical Notes on using Analog Devices’ DSP components and development tools
During mode register setup, all the address and bank-select pins are used to configure the SDRAM.
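For illustration, the sketch below packs the standard SDR SDRAM mode-register fields onto the
address pins. The field layout (burst length on A2:0, burst type on A3, CAS latency on A6:4, write
burst mode on A9) is the common JEDEC one; the helper name is made up for this example.

/* Mode-register word driven on the address pins during a LOAD MODE
 * REGISTER command (standard SDR SDRAM field layout):
 *   A2:0  burst length code (0=1, 1=2, 2=4, 3=8, 7=full page)
 *   A3    burst type        (0=sequential, 1=interleaved)
 *   A6:4  CAS latency       (2 or 3)
 *   A9    write burst mode  (0=burst, 1=single write)               */
unsigned mode_register(unsigned burst_len_code, unsigned interleaved,
                       unsigned cas_latency, unsigned single_write)
{
    return (burst_len_code & 0x7)
         | ((interleaved   & 0x1) << 3)
         | ((cas_latency   & 0x7) << 4)
         | ((single_write  & 0x1) << 9);
}

/* Example: CAS latency 3, sequential burst of 4, burst writes:
 * mode_register(2, 0, 3, 0) == 0x032                                */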
2.2 – Data Units
Relevant units: address buffer, row and column address latches, row and column decoders,
refresh logic, burst counter, data control circuit and DQ buffer
Address Buffer
The input address buffer latches the current address of the specific command. The RAS and CAS strobes
from the command decoder indicate whether the row or the column address latch is selected. The buffer
is used for address pipelining, which means that during reads more than one address (depending on the
read latency) can be latched before data is available.
Note: Address pipelining is an important performance benefit versus asynchronous memories.
Address Decoder
The row decoder drives the selected word lines of the array. To access, for example, 4096 rows, you need 12
address lines. The column decoder drives the selected bit lines; its length represents the page size.
Typical I/O structures are:
• 4 bit => 4096-word page size
• 4 bit => 2048-word page size
• 4 bit => 1024-word page size
• 8 bit => 512-word page size
• 16 bit => 256-word page size
• 32 bit => 256-word page size
Note: The bigger the I/O structure, the smaller the page size.
Decoding 1024 words takes 10 address lines. The matrix is called a memory array or memory bank. The
matrix size is 4096 x 1024 x 4 bits = 4M x 4 bits per bank. You can find 2 or 4 independent banks; this
value typically depends on the SDRAM size (see the sketch after this list):
• 16 Mbit => 2 banks
• >16 Mbit => 4 banks
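A minimal geometry check for the 4M x 4bit x 4-bank device of figure 4, computing the total density
from the address widths given above:

#include <stdio.h>

/* Geometry of the 4M x 4bit x 4-bank device of figure 4:
 * 12 row address lines -> 4096 rows, 10 column lines -> 1024 words
 * (the page size), 4-bit I/O, 2 bank-select pins -> 4 banks.      */
int main(void)
{
    const unsigned rows  = 1u << 12;  /* A11:0 row address   */
    const unsigned cols  = 1u << 10;  /* A9:0 column address */
    const unsigned width = 4;         /* DQ3:0               */
    const unsigned banks = 1u << 2;   /* BA1:0               */

    unsigned long long bits =
        (unsigned long long)rows * cols * width * banks;
    printf("device size: %llu Mbit\n", bits >> 20);  /* 64 Mbit */
    return 0;
}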
Refresh Logic
SDRAMs use the CBR refresh to benefit from the internal refresh counter. All rows must be refreshed
within the specified maximum refresh time tREF in order to avoid data loss. The refresh counter starts
addressing the rows in all banks simultaneously each time a request arrives from the external controller or
from the internal timer (self refresh), signaled by asserting CAS before RAS. The pointer increments
automatically to the next address after each refresh and wraps around after a full period.
Note: The auto refresh (CBR refresh) is the refresh mode for SDRAM used in standard data
transactions.
List of refresh values:

Size      Rows  tREF   Refresh Rate  tREF/Rows
16 Mbit   2k    32 ms  64 kHz        15.625 µs
64 Mbit   2k    32 ms  64 kHz        15.625 µs
64 Mbit   4k    64 ms  64 kHz        15.625 µs
128 Mbit  4k    64 ms  64 kHz        15.625 µs
256 Mbit  8k    64 ms  128 kHz       7.8125 µs
512 Mbit  8k    64 ms  128 kHz       7.8125 µs
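The last two columns follow directly from the first three: all rows must be refreshed once per tREF, so
on average one auto refresh command is needed every tREF/rows. A quick check for the 64 Mbit/4k row:

#include <stdio.h>

/* Per-row refresh interval: one AUTO REFRESH command is needed,
 * on average, every tREF/rows (here for the 64 Mbit/4k row). */
int main(void)
{
    const double tref_ms = 64.0;   /* refresh period tREF [ms] */
    const unsigned rows  = 4096;   /* 4k rows                  */

    double interval_us = tref_ms * 1e3 / rows;
    printf("one refresh every %.3f us (%.0f kHz)\n",
           interval_us, 1e3 / interval_us);  /* 15.625 us, 64 kHz */
    return 0;
}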
DQ Buffer
The DQ buffers register the data on the rising edge of the clock. The DQM (mask) pin controls the data
buffer. In read mode, DQM controls the output buffers like the conventional ~OE pin on DRAMs:
DQM high switches the output buffer off, DQM low switches it on.
In write mode (~WE asserted), DQM controls the word mask: input data is written to the cell if DQM
is low, but not if DQM is high.
The fixed DQM latency is:
• 2 clock cycles for reads
• no latency for writes
Vendors offer independent DQM[x] pins depending on the I/O structure. This feature allows the data to
be controlled nibble- or byte-wise, for instance to allow byte write accesses. If this is not desired, the
DQM[x] pins must be interconnected.
Note: The SDRAM controller controls the DQ buffer via the state of the DQM pins.
I/O size  Number of DQMs  Masked word size
4 bit     1               1 nibble
8 bit     2               1 nibble
16 bit    2               1 byte
32 bit    4               1 byte
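As a sketch of how a controller might drive the mask pins, assume a 16-bit device with two DQM pins
(DQM1 masking DQ15:8, DQM0 masking DQ7:0); the helper below is hypothetical:

/* Deriving the DQM levels for a write on a 16-bit SDRAM with two
 * mask pins. A high DQM blocks the corresponding byte, so the mask
 * is the inverse of the byte-enable pattern.                      */
unsigned dqm_for_write(unsigned byte_enables /* bit0=DQ7:0, bit1=DQ15:8 */)
{
    return (~byte_enables) & 0x3;   /* DQM high = byte masked */
}

/* Example: writing only the low byte -> dqm_for_write(0x1) == 0x2,
 * i.e. DQM1 high (upper byte masked), DQM0 low (lower byte written). */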
Additionally, masking is used to block the SDRAM’s data buffer during precharge, since invalid data may
otherwise be written in the same clock cycle as the precharge command. To prevent this from happening,
the DQM pin is driven high in the same clock cycle as the precharge; this blocks the data of the burst
operation.
Moreover, masking during read-to-write transitions is useful to avoid data contention caused by the
different latencies.
Note: The DQM pin is used to optimize read-to-write transitions.