Application Report
SPRACI2A – October 2018 – Revised March 2019
AM65x/DRA80xM DDR Board Design and Layout Guidelines
ABSTRACT
The goal of this document is to describe how to make the AM65x/DRA80xM DDR system implementation
straightforward for all designers. The requirements have been distilled down to a set of layout and routing
rules that allow designers to successfully implement a robust design for the topologies TI supports.
Contents
1 Overview
  1.1 Board Designs Supported
  1.2 General Board Layout Guidelines
  1.3 PCB Stack-up
  1.4 Bypass Capacitors
  1.5 DDR RESET
  1.6 Velocity Compensation
2 DDR4 Board Design and Layout Guidance
  2.1 DDR4 Introduction
  2.2 DDR4 Device Implementations Supported
  2.3 DDR4 Interface Schematics
  2.4 Compatible JEDEC DDR4 Devices
  2.5 Placement
  2.6 DDR4 Keepout Region
  2.7 VPP
  2.8 Net Classes
  2.9 DDR4 Signal Termination
  2.10 VREF Routing
  2.11 VTT
  2.12 POD Interconnect
  2.13 CK and ADDR_CTRL Topologies and Routing Guidance
  2.14 Data Group Topologies and Routing Guidance
  2.15 CK and ADDR_CTRL Routing Specification
  2.16 Data Group Routing Specification
  2.17 Bit Swapping
3 LPDDR4 Board Design and Layout Guidance
  3.1 LPDDR4 Introduction
  3.2 LPDDR4 Device Implementations Supported
  3.3 LPDDR4 Interface Schematics
  3.4 Compatible JEDEC LPDDR4 Devices
  3.5 Placement
  3.6 LPDDR4 Keepout Region
  3.7 Net Classes
  3.8 LPDDR4 Signal Termination
  3.9 LPDDR4 VREF Routing
  3.10 LPDDR4 VTT
  3.11 CK and ADDR_CTRL Topologies
  3.12 Data Group Topologies
  3.13 CK and ADDR_CTRL Routing Specification
  3.14 Data Group Routing Specification
  3.15 Channel, Byte, and Bit Swapping
4 DDR3L Board Design and Layout Guidance
  4.1 DDR3L Introduction
  4.2 DDR3L Device Implementations Supported
  4.3 DDR3L Interface Schematics
  4.4 Compatible JEDEC DDR3L Devices
  4.5 Placement
  4.6 DDR3L Keepout Region
  4.7 Net Classes
  4.8 DDR3L Signal Termination
  4.9 VREF Routing
  4.10 VTT
  4.11 CK and ADDR_CTRL Topologies and Routing Guidance
  4.12 Data Group Topologies and Routing Guidance
  4.13 CK and ADDR_CTRL Routing Specification
  4.14 Data Group Routing Specification
  4.15 Bit Swapping
List of Figures
1 32-Bit, Single-Rank DDR4 Implementation With ECC Using x16 SDRAMs
2 32-Bit, Single-Rank DDR4 Implementation With ECC Using x8 SDRAMs
3 Placement Specifications
4 DDR Keepout Region
5 CK Topology for Five SDRAM Devices
6 ADDR_CTRL Topology for Five SDRAM Devices
7 CK Routing for Five SDRAM Devices
8 ADDR_CTRL Routing for Five SDRAM Devices
9 DQS Topology
10 DQ/DM Topology
11 DQS Routing to Five SDRAM Devices
12 DQ/DM Routing to Five SDRAM Devices
13 32-Bit, Single-Rank LPDDR4 Implementation (No ECC)
14 16-Bit, Single-Rank LPDDR4 Implementation (No ECC)
15 16-Bit, Single-Rank LPDDR4 Implementation (With ECC)
16 16-Bit, Single-Rank / Channel LPDDR4 Implementation (No ECC)
17 LPDDR4 Placement Specification
18 LPDDR4 Keepout Region
19 LPDDR4 CK Topology
20 LPDDR4 ADDR_CTRL Topology
21 LPDDR4 DQS Topology
22 LPDDR4 DQ/DM Topology
23 32-Bit, Single-Rank DDR3L Implementation With ECC Using x16 SDRAMs
24 32-Bit, Single-Rank DDR3L Implementation With ECC Using x8 SDRAMs

List of Tables
1 PCB Stack-up Specifications
2 Bulk Bypass Capacitors
3 High-Speed Bypass Capacitors
4 Supported DDR4 SDRAM Combinations
5 Compatible JEDEC DDR4 Devices
6 Placement Parameters
7 Clock Net Class Definitions
8 Signal Net Class Definitions
9 CK and ADDR_CTRL Routing Specifications
10 Data Group Routing Specifications
11 Supported LPDDR4 SDRAM Combinations
12 Compatible JEDEC LPDDR4 Devices
13 LPDDR4 Placement Parameters
14 Clock Net Class Definitions
15 Signal Net Class Definitions
16 CK and ADDR_CTRL Routing Specifications
17 Data Group Routing Specifications
18 Compatible JEDEC DDR3L Devices
19 Clock Net Class Definitions
20 Signal Net Class Definitions
Trademarks
All trademarks are the property of their respective owners.
1 Overview
The AM65x/DRA80xM processor supports three different types of DDR memories: DDR4, LPDDR4, and
DDR3L. This allows customer board designs to be implemented with the memory type that best meets
their target market at the lowest possible DDR SDRAM cost. This document is divided into four sections.
The first section contains material applicable to board designs containing any of the three DDR SDRAM
memory types. This is followed by sections containing information specific to each of the DDR memory
types.
1.1 Board Designs Supported
The goal of this document is to make the AM65x/DRA80xM DDR system implementation straightforward
for all designers. Requirements have been distilled down to a set of layout and routing rules that allow
designers to successfully implement a robust design for the topologies that TI supports. At this time, TI
does not provide timing parameters for the processor’s DDR PHY interface.
It is still expected that the PCB design work (design, layout, and fabrication) is performed and reviewed by
a highly knowledgeable high-speed PCB designer. Problems such as impedance discontinuities when
signals cross a split in a reference plane can be detected visually by those with the proper experience.
TI only supports board designs using DDR4, LPDDR4, or DDR3L memory that follow the guidelines in this
document. These guidelines are based on well-known transmission line properties for copper traces
routed over a solid reference plane. Declaring insufficient PCB space does not allow routing guidelines to
be discounted.
1.2 General Board Layout Guidelines
To ensure good signaling performance, the following general board design guidelines must be followed:
• Avoid crossing plane splits in the signal reference planes.
• Use the widest trace that is practical between decoupling capacitors and memory modules.
• Minimize ISI (inter-symbol interference) by keeping impedances matched.
• Minimize crosstalk by isolating sensitive signals, such as strobes and clocks, and by using a proper PCB stack-up.
• Avoid return path discontinuities by adding vias or capacitors whenever signals change layers and reference planes.
• Minimize reference voltage noise through proper isolation and proper use of decoupling capacitors on the reference input pins on the SDRAMs.
• Keep the signal routing stub lengths as short as possible.
• Add additional spacing for clock and strobe nets to minimize crosstalk.
• Maintain a common ground (also called VSS) reference for all bypass and decoupling capacitors.
• Consider the differences in propagation delays between microstrip and stripline nets when evaluating timing constraints.
Refer to the High-Speed Interface Layout Guidelines Application Report. It provides additional general
guidance for successful routing of high-speed signals.
1.3 PCB Stack-up
The minimum stack-up for routing the DDR interface is a six-layer stack-up. However, this can only be accomplished on a board with ample routing room and large keepout areas. Additional layers are required if:
• The PCB layout area for the DDR interface is restricted, which limits the area available to spread out the signals to minimize crosstalk.
• Other circuitry must exist in the same area, but on layers isolated from the DDR routing.
• Additional plane layers are needed to enhance the power supply routing or to improve EMI shielding.
Board designs that are relatively dense require 10 or more layers to properly allow the DDR routing to be
implemented such that all rules are met.
DDR signals with the highest frequency content (such as data or clock) must be routed adjacent to a solid
VSS reference plane. Signals with lower frequency content (such as address) can be routed adjacent to
either a solid VSS or a solid VDDS_DDR reference plane. If a VDDS_DDR reference plane is used,
bypass capacitors must be implemented near both ends of every route to provide a low-inductance, AC
path to ground for these routes. Similarly, when multiple VSS reference planes exist in the DDR routing
area, stitching vias must be implemented nearby wherever vias transfer signals to a different VSS
reference plane. This is required to maintain a low-inductance return current path.
Some PCB stack-ups implement signal routing on 2 adjacent layers. This is acceptable only as long as the
routing on these layers is perpendicular and does not allow for broad-side coupling. Severe crosstalk
occurs on any trace routed parallel to another trace on an adjacent layer, even for a short distance. Also,
DDR signal routing on 2 adjacent layers is only allowed when implementing offset stripline routing, where
the distance between the adjacent routing layers is more than 3x the distance from the traces to their
adjacent reference plane.
Table 1. PCB Stack-up Specifications

| Number | Parameter | MIN | TYP | MAX | UNIT |
|---|---|---|---|---|---|
| PS1 | PCB routing plus plane layers | 6 | | | |
| PS2 | Signal routing layers | 3 | | | |
| PS3 | Full VSS reference layers under DDR routing region (1) | 1 | | | |
| PS4 | Full VDDS_DDR power reference layers under the DDR routing region (1) | 1 | | | |
| PS5 | Number of reference plane cuts allowed within DDR routing region (2) | | | 0 | |
| PS6 | Number of layers between DDR routing layer and reference plane (3) | | | 0 | |
| PS7 | PCB routing feature size | | 4 | | Mils |
| PS8 | PCB trace width, w | | 4 | | Mils |
| PS9 | Single-ended impedance, Zo | 40 | 50 | | Ω |
| PS10 | Impedance control (4) | Z-5 | Z | Z+5 | Ω |

(1) Ground reference layers are preferred over power reference layers. Include bypass caps to accommodate reference layer return current, as the trace routes switch routing layers.
(2) No traces should cross reference plane cuts within the DDR routing region. High-speed signal traces crossing reference plane cuts create large return current paths, which can lead to excessive crosstalk and EMI radiation. Beware of reference plane voids caused by via antipads, as these also cause discontinuities in the return current path.
(3) Reference planes are to be directly adjacent to the signal layer, to minimize the size of the return current loop.
(4) Z is the nominal single-ended impedance selected for the PCB, specified by PS9.

1.4 Bypass Capacitors
1.4.1 Bulk Bypass Capacitors
Bulk bypass capacitors are required for moderate speed bypassing of the DDR SDRAMs and other
circuitry. Table 2 contains the minimum numbers and capacitance required for the bulk bypass capacitors.
Table 2 only covers the bypass needs of the DDR PHY and DDR SDRAM devices. Additional bulk bypass
capacitance may be needed for other circuitry. For any additional SDRAM requirements, see the
manufacturer's data sheet.
Table 2. Bulk Bypass Capacitors

| Number | Parameter | MIN | MAX | UNIT |
|---|---|---|---|---|
| 1 | VDDS_DDR bulk bypass capacitor count (1) | 1 | | Devices |
| 2 | VDDS_DDR bulk bypass total capacitance | 22 | | µF |

(1) These devices should be placed near the devices they are bypassing, but preference should be given to the placement of the high-speed (HS) bypass capacitors and DDR signal routing.
1.4.2 High-Speed Bypass Capacitors
High-speed (HS) bypass capacitors are critical for proper DDR interface operation. It is particularly
important to minimize the parasitic series inductance of the HS bypass capacitors to VDDS_DDR and the
associated ground connections. Table 3 contains the specification for the HS bypass capacitors and for
the power connections on the PCB. Generally speaking, TI recommends:
• Fitting as many HS bypass capacitors as possible.
• Minimizing the distance from the bypass capacitor to the pins and balls being bypassed.
• Using the smallest physically sized ceramic capacitors possible with the highest capacitance readily available.
• Connecting the bypass capacitor pads to their vias using the widest traces possible and using the
largest via hole size possible.
• Minimizing via sharing. Note the limits on via sharing shown in Table 3.
For any additional SDRAM requirements, see the manufacturer's data sheet.
Table 3. High-Speed Bypass Capacitors

| Number | Parameter | MIN | TYP | MAX | UNIT |
|---|---|---|---|---|---|
| 1 | HS bypass capacitor package size (1) | 0201 | | 0402 | 10 Mils |
| 2 | Distance, HS bypass capacitor to processor being bypassed (2) (3) (4) | | | 400 | Mils |
| 3 | Processor HS bypass capacitor count per VDDS_DDR rail | See PDN Guide (5) | | | Devices |
| 4 | Processor HS bypass capacitor total capacitance per VDDS_DDR rail | See PDN Guide (5) | | | µF |
| 5 | Number of connection vias for each device power/ground ball | 1 | | | Vias |
| 6 | Trace length from processor power/ground ball to connection via (2) | | | 35 | Mils |
| 7 | Distance, HS bypass capacitor to DDR device being bypassed (6) | | 70 | 150 | Mils |
| 8 | DDR device HS bypass capacitor count (7) | 12 | | | Devices |
| 9 | DDR device HS bypass capacitor total capacitance (7) | 0.85 | | | µF |
| 10 | Number of connection vias for each HS capacitor (8) | 2 | | | Vias |
| 11 | Trace length from bypass capacitor to connection via (2) (9) | | 35 | 100 | Mils |
| 12 | Number of connection vias for each DDR device power/ground ball | 1 | | | Vias |
| 13 | Trace length from DDR device power/ground ball to connection via (2) | | 35 | 60 | Mils |

(1) LxW, 10-mil units, that is, a 0402 is a 40x20-mil surface-mount capacitor.
(2) Closer/shorter is preferable.
(3) Measured from the nearest processor power or ground ball to the center of the capacitor package.
(4) Three of these capacitors should be located underneath the processor, among the cluster of VDDS_DDR balls.
(5) The capacitor recommendations in this guide reflect only the needs of this processor. See the memory vendor's guidelines for determining the appropriate decoupling capacitor arrangement for the memory device itself.
(6) Measured from the DDR device power or ground ball to the center of the capacitor package. Refer to the guidance from the SDRAM manufacturer.
(7) Per DDR device. Refer to the guidance from the SDRAM manufacturer.
(8) An additional HS bypass capacitor can share the connection vias only if it is mounted on the opposite side of the board. No sharing of vias is permitted on the same side of the board.
(9) An HS bypass capacitor may share a via with a DDR device mounted on the same side of the PCB. A wide trace should be used for the connection, and the length from the capacitor pad to the DDR device pad should be less than 150 mils.

1.4.3 Return Current Bypass Capacitors
Use additional bypass capacitors if the return current reference plane changes due to DDR signals
hopping from one signal layer to another, resulting in the reference plane changing from VDDS_DDR to
VSS. The bypass capacitor here provides a path for the return current to hop planes along with the signal.
Use as many of these return current bypass capacitors as possible, up to one per signal via. Because these vias carry only signal return current, they can use the smaller via size used for signal routing.
1.5 DDR RESET
The AM65x/DRA80xM DDR PHY contains two reset outputs: DDR_RESETn and DDR_FS_RESET. All schematics in this document show the use of the DDR_RESETn pin and not the DDR_FS_RESET pin. These reset outputs are functionally identical. When enabled, these outputs transition both high and low simultaneously. However, DDR_FS_RESET is driven from a special fail-safe buffer.
buffers are designed such that they do not have dependencies on the respective I/O power supply
voltage. This allows external voltage sources to be connected to the output buffer while the respective I/O
power supplies are turned off. Specifically for DDR_FS_RESET, this pin can be connected to a pull-up
resistor that holds this pin at a voltage even when the VDDS_DDR I/O supply is powered off. This
capability has been added to the DDR PHY to support possible low-power modes where the processor is
powered down while the DDR SDRAMs maintain their contents in Self-Refresh. Support for low-power
modes is still under study.
1.6 Velocity Compensation
Because portions of the DDR signal traces are microstrip (top and bottom layers) while others are stripline
(internal layers), and because there is a wide variation in the proportion of track length routed as
microstrip or stripline, the length/delay matching process should include a mechanism for compensating
for the velocity delta between these two types of PCB interconnects. A compensation factor of 1.1 has been specified for this purpose. All microstrip segment lengths are to be divided by 1.1 before
summation into the length matching equation. The resulting compensated length is termed the 'stripline
equivalent length'. While some amount of residual velocity mismatch skew remains in the design, the
process is a substantial improvement over simple length matching.
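As an illustrative aid (not part of the application report itself), the stripline-equivalent computation described above can be expressed in a few lines of Python; the segment values below are placeholders, not specification lengths:

```python
# Sketch: computing the 'stripline equivalent length' described above.
# Segment lengths are in mils; microstrip segments are divided by the
# 1.1 velocity compensation factor before summation.
MICROSTRIP_VELOCITY_FACTOR = 1.1

def stripline_equivalent_length(segments):
    """segments: list of (length_mils, is_microstrip) tuples."""
    total = 0.0
    for length, is_microstrip in segments:
        if is_microstrip:
            total += length / MICROSTRIP_VELOCITY_FACTOR
        else:
            total += length
    return total

# Example: 300-mil microstrip breakout, 2400-mil stripline run,
# 150-mil microstrip at the SDRAM ball (illustrative values only).
route = [(300.0, True), (2400.0, False), (150.0, True)]
print(stripline_equivalent_length(route))  # ~2809 mils
```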
2 DDR4 Board Design and Layout Guidance

2.1 DDR4 Introduction
DDR4 board designs are similar to DDR3 board designs. Fly-by routing is required just as it is with DDR3,
and thus leveling is required. To achieve higher data rates with DDR4, there are many enhancements
added to the interface specification that must be accommodated by both the SDRAM and the processor’s
interface (PHY). The enhancements that affect the board interconnect and layout are listed below:
• Addition of ACT_n pin – This pin provides signaling to allow the pins previously called Command pins
(RAS_n, CAS_n and WE_n) to be used as additional address pins. These pins behave as row address
pins when ACT_n is low and as command pins when ACT_n is high. This is valid only when CS_n is
low.
• Removal of one bank address pin and addition of 2 bank group pins – This adds flexibility with accesses similar to DDR3, but with 16 banks bundled in four bank groups of 4 banks each. This results in additional timing parameters, because back-to-back accesses within the same bank group are slower than back-to-back accesses to a different bank group. Alternating successive accesses across bank groups is therefore the fastest option.
• Addition of PARITY and ALERT_n pins (use is optional) – The PARITY pin supplies parity monitoring
for the command and address pins using even parity from the controller to the SDRAM. ALERT_n is the indicator (open-drain output) from the SDRAMs that indicates when a parity error has been detected.
• Change to POD termination – Pseudo open drain (POD) output buffers are implemented rather than
traditional SSTL push-pull outputs. This allows the data bit termination, ODT, to go to the I/O power
rail, VDDQ, rather than to the mid-level voltage, VTT. Power consumption may be reduced, because
only driving a bit low draws current.
• Addition of DBI – Data bus invert (DBI) is a feature that allows the data bus to be inverted whenever
more than half of the bits are zero. This feature may reduce active power and enhance the data signal
integrity when coupled with POD termination.
• Addition of a VPP power input – The VPP power supply (2.5 V) provides power to the internal word
line logic. This voltage increase allows the SDRAM to reduce overall power consumption.
• Separation of data VREF from address/control VREF – The data reference voltage, VREFDQ, is now
internally generated both within the SDRAM and within the PHY. It can be programmed to various
levels to provide the optimum sampling threshold. The optimum threshold varies based on the ODT
impedance chosen, the drive strength, and the PCB track impedance. The address/control reference
voltage, VREFCA, is a mid-level reference voltage, the same as it is on DDR3.
NOTE: These features may not be supported on all devices. Refer to the device-specific
documentation for supported features.
2.2 DDR4 Device Implementations Supported
There are several possible combinations of SDRAM devices supported by the DDR4 EMIF. Table 4 lists
the supported device combinations. The SDRAMs used in each combination must be identical: that is,
they must have the same part number.
Table 4. Supported DDR4 SDRAM Combinations

| Number of DDR4 SDRAMs | DDR4 SDRAM Width (bits) | DDR4 EMIF Width (bits) |
|---|---|---|
| 1 | 16 | 16 |
| 2 | 16 | 32 |
| 2 | 16 | 16 plus ECC |
| 3 | 16 | 32 plus ECC |
| 2 | 8 | 16 |
| 3 | 8 | 16 plus ECC |
| 4 | 8 | 32 |
| 5 | 8 | 32 plus ECC |

2.3 DDR4 Interface Schematics
This section discusses implementations (also called topologies) using single-rank x16 and x8 SDRAM
devices. This section does not discuss recommendations for implementations that support low-power
operation, such as when the SDRAM is held in self-refresh and the processor is powered off. It also does
not discuss the DDR-less implementations. These options are under study and may be supported in future
versions of this document.
2.3.1 DDR4 Implementation Using 16-bit SDRAM Devices
The DDR4 interface schematics vary, depending upon the width of the DDR4 SDRAM devices used, the
width of the EMIF bus implemented (16 or 32 bits), and whether ECC is implemented. General
connectivity is straightforward and consistent between the implementations. 16-bit SDRAM devices look
like two 8-bit devices. Figure 1 shows the schematic connections for a 32-bit interface with ECC using x16
devices. If ECC is not desired, the third SDRAM is not implemented. Similarly, a 16-bit interface only uses
a single x16 SDRAM.
When not using one or more of the byte lanes on the processor, the proper method of handling the unused pins is to tie off the unused DDR_DQSxP pins to ground through a 1-kΩ resistor and to tie off the unused DDR_DQSxN pins to the VDDS_DDR supply (also referred to as the I/O supply, VDDQ) through a 1-kΩ resistor. This must be done for each unused byte. Although these signals have internal pullups and pulldowns, external pullups and pulldowns provide additional protection against external electrical noise causing activity on the signals.
[Schematic: the processor DDR4 PHY connected to three x16 SDRAMs (two data devices plus one ECC device with its upper byte unconnected), with fly-by CK and ADDR_CTRL nets terminated to VTT, VREFCA distribution, and 240-Ω 1% ZQ and DDR_VTP reference resistors.]

Figure 1. 32-Bit, Single-Rank DDR4 Implementation With ECC Using x16 SDRAMs
2.3.2 DDR4 Implementation Using 8-bit SDRAM Devices
Figure 2 shows the schematic connections for a 32-bit interface with ECC using x8 devices. If ECC is not
desired, the fifth SDRAM is not implemented. Similarly, a 16-bit interface only uses a pair of x8 SDRAMs
and a 16-bit interface with ECC uses three x8 SDRAMs.
[Schematic: the processor DDR4 PHY connected to five x8 SDRAMs (four data devices plus one ECC device), with fly-by CK and ADDR_CTRL nets terminated to VTT, VREFCA distribution, and 240-Ω 1% ZQ and DDR_VTP reference resistors.]

Figure 2. 32-Bit, Single-Rank DDR4 Implementation With ECC Using x8 SDRAMs
2.4 Compatible JEDEC DDR4 Devices
Table 5 shows the parameters of the JEDEC DDR4 devices compatible with this interface. Generally, the
DDR4 interface is compatible with all JEDEC-compliant DDR4 SDRAM devices in x8 or x16 widths.
Table 5. Compatible JEDEC DDR4 Devices

| Number | Parameter | MIN | MAX | UNIT |
|---|---|---|---|---|
| 1 | JEDEC DDR4 data rate (1) (2) | | | MT/s |
| 2 | JEDEC DDR4 device bit width | x8 | x16 | Bits |
| 3 | JEDEC DDR4 device count (3) | 1 | 5 | Devices |

(1) Refer to the device data manual for supported data rates.
(2) SDRAMs in faster speed grades can be used provided they are properly configured to operate at the supported data rates. Faster speed grade SDRAMs may have faster edge rates, which may affect signal integrity. SDRAMs with faster speed grades must be validated on the target board design.
(3) For valid DDR4 device configurations and device counts, see Figure 1 and Figure 2.

2.5 Placement
Figure 3 shows the required placement for the processor and the DDR4 devices. The dimensions for this
figure are defined in Table 6. The placement does not restrict the side of the PCB on which the devices
are mounted. The ultimate purpose of the placement is to limit the maximum trace lengths and allow for
proper routing space.
Figure 3. Placement Specifications (dimensions x1, y1, y2, and y3 are defined in Table 6)
Table 6. Placement Parameters

| Number | Parameter | MIN | MAX | UNIT |
|---|---|---|---|---|
| 1 | x1 | | 2000 | Mils |
| 2 | y1 | | 2000 | Mils |
| 3 | y2 | | 1000 | Mils |
| 4 | y3 | | 750 | Mils |

2.6 DDR4 Keepout Region
The region of the PCB used for DDR4 circuitry must be isolated from other signals. The DDR4 keepout
region is defined for this purpose and is shown in Figure 4. The size of this region varies with the
placement and DDR routing. Non-DDR4 signals should not be routed on the DDR signal layers within the
DDR4 keepout region. Non-DDR4 signals may be routed in this region only if they are routed on other
layers separated from the DDR signal layers by a ground layer. No breaks are allowed in the reference
ground layers in this region. In addition, a solid VDDS_DDR power plane should exist across the entire
keepout region.
Figure 4. DDR Keepout Region
2.7 VPP
VPP is a new supply input on DDR4 SDRAMs. This supply must provide an average of less than 5 mA in
active and standby modes and 10 to 20 mA during refresh. This is not a constant current draw during
refresh. The VPP power supply and decoupling capacitors must be able to supply short bursts of current
up to 60 mA during this time.
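As a rough, illustrative sizing check (not from this document): if the 60-mA burst is assumed to last about one refresh cycle time (tRFC is roughly 350 ns for an 8-Gb DDR4 device, used here only as an assumption) and the 2.5-V VPP rail is allowed 5% (125 mV) of droop, the local decoupling must supply the charge:

$$ C \ge \frac{I \, \Delta t}{\Delta V} = \frac{60\ \text{mA} \times 350\ \text{ns}}{125\ \text{mV}} \approx 0.17\ \mu\text{F} $$

Actual capacitor selection should follow the SDRAM manufacturer's guidance.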
2.8 Net Classes
Routing rules are applied to signals in groups called net classes. Each net class contains signals with the
same routing requirements. This simplifies the implementation and compliance of these routes. Table 7
lists the clock net classes for the DDR4 interface. Table 8 lists the signal net classes, and associated
clock net classes, for signals in the DDR4 interface. These net classes are then linked to the termination
and routing rules that follow.
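As an illustrative aid (not part of the TI guidelines), the groupings in the two tables below can be captured as data to drive scripted length and skew checks in a PCB tool; the pin names follow Table 7 and Table 8, while the structure itself is only a sketch:

```python
# Sketch: DDR4 net classes from Table 7 and Table 8 expressed as data,
# e.g. to drive scripted length/skew rule checks. Abbreviated to two byte
# lanes; extend with DQS2/DQS3 and the ECC byte as needed.
DDR4_CLOCK_NET_CLASSES = {
    "CK0": ("DDR_CK0P", "DDR_CK0N"),
    "CK1": ("DDR_CK1P", "DDR_CK1N"),
    "DQS0": ("DDR_DQS0P", "DDR_DQS0N"),
    "DQS1": ("DDR_DQS1P", "DDR_DQS1N"),
}

DDR4_SIGNAL_NET_CLASSES = {
    # signal net class: (associated clock net class, member nets)
    "BYTE0": ("DQS0", [f"DDR_DQ{i}" for i in range(8)] + ["DDR_DM0"]),
    "BYTE1": ("DQS1", [f"DDR_DQ{i}" for i in range(8, 16)] + ["DDR_DM1"]),
}
```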
Table 7. Clock Net Class Definitions

| Clock Net Class | Processor Pin Names |
|---|---|
| CK0 | DDR_CK0P / DDR_CK0N |
| CK1 | DDR_CK1P / DDR_CK1N |
| DQS0 | DDR_DQS0P / DDR_DQS0N |
| DQS1 | DDR_DQS1P / DDR_DQS1N |
| DQS2 (1) | DDR_DQS2P / DDR_DQS2N |
| DQS3 (1) | DDR_DQS3P / DDR_DQS3N |
| ECC DQS (2) | DDR_ECC_DQSP / DDR_ECC_DQSN |

(1) Only used on 32-bit wide DDR4 memory systems.
(2) Only used on DDR4 memory systems with ECC.
Table 8. Signal Net Class Definitions

| Signal Net Class | Associated Clock Net Class | Processor Pin Names |
|---|---|---|
| ADDR_CTRL | CK0 / CK1 (1) | DDR_AC[17,13:0], DDR_AC14/WE_n, DDR_AC15/CAS_n, DDR_AC16/RAS_n, DDR_AC18/ACT_n, DDR_AC19/BA0, DDR_AC20/BA1, DDR_AC21/BG0, DDR_AC22/BG1, DDR_AC23/PAR, DDR_AC24/CS0_n, DDR_AC25/ODT0, DDR_AC26/CKE0, DDR_AC27/CS1_n, DDR_AC28/ODT1, DDR_AC29/CKE1 |
| BYTE0 | DQS0 | DDR_DQ[7:0], DDR_DM0 |
| BYTE1 | DQS1 | DDR_DQ[15:8], DDR_DM1 |
| BYTE2 (2) | DQS2 | DDR_DQ[23:16], DDR_DM2 |
| BYTE3 (2) | DQS3 | DDR_DQ[31:24], DDR_DM3 |
| ECC BYTE (3) | ECC DQS | DDR_ECC_D[6:0], DDR_ECC_DM |

(1) Although the CK1 clock group is independent from the CK0 clock group, all signals in this routing group are associated to CK0 for topologies supported. The CK1 clock group should be length matched with the CK0 clock group.
(2) Only used on 32-bit wide DDR4 memory systems.
(3) Only used on DDR4 memory systems with ECC.

2.9 DDR4 Signal Termination
Signal terminators are required for the CK0/1 and ADDR_CTRL net classes. This is shown in Figure 1 and
Figure 2. The data group nets are terminated by ODT in the processor and SDRAM memories, and thus
the data group PCB traces must be unterminated. Detailed termination specifications are covered in the
routing rules in the following sections.
2.10 VREF Routing
JEDEC defines two reference voltages that are used with DDR4 memory interfaces. These are VREFDQ
and VREFCA. VREFDQ is the reference voltage used for the data group nets during reads and writes.
VREFCA is the reference voltage used for command and address inputs to the SDRAMs. DDR4 SDRAMs
generate their own VREFDQ internally. Similarly, the processor's DDR4 PHY generates its own VREFDQ
internally. The VREFCA reference voltage must be generated on the board and propagated to all of the
SDRAMs. VREFCA is intended to be 50% of the DDR4 power supply voltage and is typically generated
with the DDR4 VTT power supply. It should be routed as a nominal 20-mil wide trace with 0.1-μF bypass
capacitors near each device connection. Narrowing the VREF trace is allowed to accommodate routing
congestion for short lengths near endpoints.
2.11 VTT
As with VREFCA, the nominal value of the VTT supply is 50% of the DDR4 supply voltage. Unlike
VREFCA, the VTT supply is expected to source and sink current; specifically the termination current for
the ADDR_CTRL net class Thevenin terminators. VTT is needed at the end of the address and control
bus and it should be routed as a power sub-plane. VTT must be bypassed near the terminator resistors.
2.12 POD Interconnect
Prior to DDR4, the output buffers were push-pull CMOS buffers. They would sink current when driving low
and source current when driving high. They were then terminated to a mid-level Thevenin resistance to
obtain optimum power transfer and signal integrity. Unfortunately, this resulted in current flowing, and
power being dissipated, whenever the buffers were enabled at either high or low. Pseudo Open Drain
(POD) is a connection type where the termination at the load, ODT, is only connected to VDDQ. POD
connections only consume power when driving low, thus reducing power. In DDR4, both the PHY (for
reads) and SDRAM (for writes) provide these terminations to VDDQ internally on all of the data group
pins.
Signals look different on connections using POD terminations as compared to previous DDR connections,
where the data group signals went from VSS to VDDQ and sampling was based on a mid-level reference
voltage. The high level is still at VDDQ. However, the low level is now calculated based on the drive
impedance and the ODT resistance. If they are both set to 50 Ω, the low-level voltage is now at VDDQ/2.
That then requires a sampling voltage halfway between those voltages, or 3/4 × VDDQ, for optimum performance.
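Written out, with the driver impedance R_ON and the termination R_ODT both set to 50 Ω as in the example above:

$$ V_{OL} = V_{DDQ}\,\frac{R_{ON}}{R_{ON}+R_{ODT}} = \frac{V_{DDQ}}{2}, \qquad V_{REFDQ} = \frac{V_{DDQ}+V_{OL}}{2} = \frac{3}{4}\,V_{DDQ} $$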
2.13 CK and ADDR_CTRL Topologies and Routing Guidance
The CK and ADDR_CTRL net classes are routed similarly, and are length matched from the DDR PHY in
the processor to each SDRAM to minimize skew between them. The CK0 and CK1 net classes require
more care because they run at a higher transition rate and are differential.
The CK and ADDR_CTRL net classes are routed in a ‘fly-by’ implementation. This means that the CK and
ADDR_CTRL net classes are routed as a multi-drop bus from the DDR controller in the processor
sequentially to each SDRAM, and each signal has a termination at the end. To complete this routing, a
small stub trace exists on each net at each SDRAM. These stubs must be short and approximately the
same length to manage the reflections. The ADDR_CTRL net class is length matched to the CK net class,
at each SDRAM, so that the ADDR_CTRL signals are properly sampled at each SDRAM.
NOTE: Fly-by routing is required for DDR4 layouts. Balanced-T routing, previously used for DDR2
layouts, is not supported.
Section 2.2 discussed that there are multiple possible memory topologies, or implementations, ranging
from a single x16 SDRAM up to a maximum of five x8 SDRAMs. Regardless of the number of SDRAMs
implemented, the routing requirements must be followed. TI recommends that all SDRAMs be
implemented on the same side of the board, preferably on the same side of the board as the processor. It
is possible to implement the SDRAMs on both sides of the board, but the routing complexity and the
number of PCB layers required is significantly increased.
Figure 5 shows the topology of the CK net classes, and Figure 6 shows the topology for the corresponding
ADDR_CTRL net class. The fly-by routes have been broken into segments to simplify the length matching
analysis. Care must be taken to avoid excessive length error accumulation with this method.
Segments A1 and A2 comprise the lead-in section. Segment AT is the track to the termination at the end
of the net. Segments A3 are the routed track between the stubs that branch off to each SDRAM. For
topologies with fewer SDRAMs, remove an A3 segment for each SDRAM not present. Length matching
requirements for the routing segments are detailed in Table 9.
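The per-SDRAM totals are composed from these segments. A minimal sketch of that composition (illustrative delay values, not specification limits), matching the combined-length rule used by the total-skew checks in Table 9:

```python
# Sketch: composing the CK/ADDR_CTRL delay from the fly-by segments in
# Figures 5 and 6. Values are in ps (1 ps is roughly 5 mils of stripline).
def route_delay_to_sdram(a1, a2, a3_segments, a_stub, sdram_index):
    """Delay from the processor to SDRAM number `sdram_index` (0 = first).

    a3_segments holds the A3 segment delays between successive SDRAMs.
    """
    return a1 + a2 + sum(a3_segments[:sdram_index]) + a_stub

# Example: five SDRAMs with illustrative delays. The 5th device (index 4)
# sees A1 + A2 + 4*A3 + AS.
print(route_delay_to_sdram(200.0, 250.0, [100.0] * 4, 10.0, 4))  # 860.0
```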
Figure 5. CK Topology for Five SDRAM Devices (differential pair from the processor clock output through segments A1, A2, and four A3 segments, with AS+/AS- stubs at each SDRAM and a parallel termination Rcp plus 0.1-µF Cac to VDDS_DDR at the end of segment AT)
Figure 6. ADDR_CTRL Topology for Five SDRAM Devices (single-ended route through segments A1, A2, and four A3 segments, with an AS stub at each SDRAM and an Rtt termination to VTT at the end of segment AT)
The previous figures show the circuit topology such that the track lengths can be managed and the routed
track length matching rules can be followed. The next two figures again show the routing for the CK and
ADDR_CTRL routing groups depicted from the perspective of tracks routed on the PCB.
Figure 7 shows the CK routing for five SDRAM devices. The fly-by routing is made clear in this figure. The
CK0P and CK0N tracks (the CK0 routing group) are routed as a differential pair from the processor to the
SDRAM at the end that will contain Byte 0 data. This differential pair routing then proceeds to each
SDRAM sequentially and ends with the AC termination to VDDS_DDR. The routing also includes the
routing stubs for both CK0P and CK0N at each SDRAM.
The second clock pair (the CK1 routing group) is only used when twin-die SDRAMs are mounted. In that
implementation, CK1P and CK1N must also be routed as a differential pair and all of their segments must
be length matched along with the CK0 routing group. It is expected that both clock pairs will route along
the same path to simplify this routing.
Figure 8 shows the ADDR_CTRL routing for five SDRAM devices. These are also routed in a fly-by
manner along the same path because the ADDR_CTRL routing group is length-matched to the CK0
routing group.
Figure 7. CK Routing for Five SDRAM Devices
Figure 8. ADDR_CTRL Routing for Five SDRAM Devices
The absolute order is not significant. The fly-by routing that starts at the processor can also route down to
the SDRAM containing the ECC data (or whichever SDRAM that is opposite in the row from the one
containing the byte 0 data). The fly-by routing then proceeds to each SDRAM in the order as discussed
above, until it routes to VTT through the Rtt termination after the byte 0 SDRAM.
Minimize layer transitions during routing. If a layer transition is necessary, it is preferable to transition to a
layer using the same reference plane. If this cannot be accommodated, ensure there are nearby stitching
vias to allow the return currents to transition between reference planes when both reference planes are
ground or VDDS_DDR. Alternately, ensure there are nearby bypass capacitors to allow the return currents
to transition between reference planes when one of the reference planes is ground and the other is
VDDS_DDR. This must occur at every reference plane transition. The goal is to minimize the size of the
return current path thus minimizing the inductance in this path. Lack of these stitching vias or capacitors
results in impedance discontinuities in the signal path that increase crosstalk and signal distortion.
2.14 Data Group Topologies and Routing Guidance
Regardless of the number of DDR4 devices implemented, the data line topology is always point-to-point.
Minimize layer transitions during routing. If a layer transition is necessary, it is better to transition to a layer
using the same reference plane. If this cannot be accommodated, ensure there are nearby ground vias to
allow the return currents to transition between reference planes. The goal is to provide a low inductance
path for the return current. Also, to optimize the length matching, TI recommends routing all nets within a
single data routing group on one layer where all have the exact same number of vias and the same via
barrel length.
DQSP and DQSN lines are point-to-point signals routed as a differential pair. Figure 9 shows the DQS
connection topology.
Figure 9. DQS Topology (point-to-point differential pair between the processor and SDRAM DQS I/O buffers)
DQ and DM lines are point-to-point signals routed single-ended. Figure 10 shows the DQ and DM connection topology.
Figure 10. DQ/DM Topology (point-to-point single-ended connection between the processor and SDRAM DQ/DM I/O buffers)
Similar to the figures above for the CK and ADDR_CTRL routes, Figure 11 and Figure 12 show an
example of the PCB routes for a DQS routing group and the associated data routing group nets.
The routing example shows DQS0P and DQS0N, which are routed as a differential pair from the
processor to the SDRAM that contains Byte 0. This is implemented as a point-to-point routed differential
pair without any board terminations. There are no stubs allowed on these nets of any kind. All test access
probes must be in line without any branches or stubs. Similar DQS pair routing exists from the processor
to each SDRAM for the byte lanes implemented.
Figure 12 shows a routing example for a single net in the Byte 0 routing group. The DQ and DM nets are
routed single-ended and are also point-to-point without any stubs or board terminations. Point-to-point
routes exist for each of the DQ and DM nets implemented.
The DQ and DM nets are routed along the same path as the DQSP and DQSN pair for that byte lane, so
that they can be length matched to the DQS pair.
Figure 11. DQS Routing to Five SDRAM Devices

Figure 12. DQ/DM Routing to Five SDRAM Devices
2.15 CK and ADDR_CTRL Routing Specification
Skew within the CK and ADDR_CTRL net classes directly reduces setup and hold margin for the
ADDR_CTRL nets. Thus, this skew must be controlled. Routed PCB track has a delay proportional to its
length. Thus, the delay skew must be managed through matching the lengths of the routed tracks within a
defined group of signals. The only way to practically match lengths on a PCB is to lengthen the shorter
traces up to the length of the longest net in the net class and its associated clock pair, CKP, and CKN.
2.15.1 CACLM - Clock Address Control Longest Manhattan Distance
A metric to establish a maximum length is Manhattan distance. The Manhattan distance between two
points on a PCB is the length between the points when connecting them only with horizontal or vertical
track segments. A reasonable limit to the trace route length is to its Manhattan distance plus some margin.
CACLM is this limit and it is defined as the Clock Address Control Longest Manhattan distance.
Given the clock and address pin locations on the processor and the DDR4 memories, the maximum
possible Manhattan distance can be determined given the placement of these parts. It is from this distance
that this rule-of-thumb limit on the lengths of the routed track for the CK and ADDR_CTRL routing groups
is determined.
It is likely that the longest CK and ADDR_CTRL Manhattan distance will be for Address Input A13 on the
DDR4 SDRAM device, because it is at the farthest corner in the placement. Assuming A13 is the longest,
calculate CACLM as the sum of CACLMY(A13) + CACLMX(A13) + 300 mils. The extra 300 mils allows for
routing past the first DDR4 SDRAM and returning up to reach pin A13. Use this as a guideline for the
upper limit to the length of the routed traces from the processor to the first SDRAM.
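A minimal sketch of this rule of thumb in Python (the ball coordinates below are placeholders, not actual AM65x/DRA80xM or SDRAM ball locations):

```python
# Sketch of the CACLM rule of thumb described above. Coordinates in mils.
def manhattan(p, q):
    """Manhattan distance: horizontal plus vertical separation."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

proc_a13 = (0, 0)        # hypothetical processor A13 ball location
sdram_a13 = (1800, 900)  # hypothetical SDRAM A13 ball location

# Extra 300 mils allows routing past the first SDRAM and back to pin A13.
caclm = manhattan(proc_a13, sdram_a13) + 300
print(caclm)  # 3000-mil upper limit for the routed CK/ADDR_CTRL lead-in
```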
2.15.2 CK and ADDR_CTRL Routing Limits
Table 9 lists the limits for the individual segments that comprise the routing from the processor to the
SDRAM. These segment lengths coincide with the CK and ADDR_CTRL topology diagram shown
previously in Figure 5 and Figure 6. By matching the length for the same segments of all signals in a
routing group, the signal delay skews are controlled.
Recall that the CK and ADDR_CTRL nets route along the same path for each segment. This simplifies the
length matching. The skew limits for the CK0 group compare the length of CK0P to the length of CK0N.
Then the skew limits for the ADDR_CTRL nets are the limits when compared to the CK nets.
Most PCB layout tools can be configured to generate reports to assist with this validation. If this cannot be
generated automatically, this must be generated and verified manually.
Table 9 also lists skew limits for the full routes from the processor to each SDRAM. This must be checked
in addition to the skew limits in the individual sections to verify that there is not accumulating error in the
layout.
To use length matching (in mils) instead of time delay (in ps), multiply the ps limit by 5. The microstrip
routes propagate faster than stripline routes. A standard practice when using length matching is to divide
the microstrip length by 1.1, to achieve a compensated length to normalize the microstrip length with the
stripline length and to align with the delay limits provided. This is called velocity compensation.
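Putting the two conversions together, a sketch of a skew check (illustrative trace lengths, not values from a real layout) against one of the Table 9 limits:

```python
# Sketch: checking a routed segment pair against a ps skew limit, using
# the 1 ps = 5 mils equivalence and the 1.1 microstrip compensation.
MILS_PER_PS = 5.0

def compensated_mils(stripline_mils, microstrip_mils):
    """Stripline-equivalent length with microstrip velocity compensation."""
    return stripline_mils + microstrip_mils / 1.1

def skew_ps(len_a_mils, len_b_mils):
    return abs(len_a_mils - len_b_mils) / MILS_PER_PS

addr_net = compensated_mils(2450.0, 220.0)  # example A1+A2 ADDR_CTRL route
ck_net = compensated_mils(2440.0, 231.0)    # example A1+A2 CK route
assert skew_ps(addr_net, ck_net) <= 3.0     # A1+A2 ADDR_CTRL-to-CK limit
```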
Table 9. CK and ADDR_CTRL Routing Specifications

Number | Parameter | MIN | TYP | MAX | UNIT
1 | A1+A2 length | | | 500 (1) | ps (2)
2 | A1+A2 skew ADDR_CTRL to CK (3) | | | 3 | ps
3 | A3 length | | | 125 | ps
4 | A3 skew ADDR_CTRL to CK (3) | | | 3 | ps
5 | A1+A2 skew CKP to CKN | | | 0.4 | ps
6 | A3 skew CKP to CKN | | | 0.4 | ps
7 | AS length | 5 (1) | | 17 | ps
8 | AS skew | 1.3 (1) | | 3 | ps
9 | AS+/AS- length | | | 17 | ps
10 | AS+/AS- skew | | | 0.4 | ps
11 | AT length (4) | | | 75 | ps
12 | AT skew ADDR_CTRL to CK (3) | | | 4 | ps
13 | AT skew CKP to CKN | | | 0.4 | ps
14 | Total CKP to CKN skew from processor to each SDRAM (5) | | | 0.8 | ps
15 | Total CK to ADDR_CTRL skew from processor to each SDRAM (5) | | | 14 | ps
16 | Vias per trace (6) | | | 3 (1) | vias
17 | Via count difference (6) | | | 1 (7) | vias
18 | Center-to-center CK to other DDR4 trace spacing (8) | 4w | | | -
19 | Center-to-center ADDR_CTRL to other DDR4 trace spacing (8) | 4w | | | -
20 | Center-to-center ADDR_CTRL to other ADDR_CTRL trace spacing (8) | 3w | | | -
21 | CK center-to-center spacing (9) (10) | | | | -
22 | CK spacing to other net (8) | 4w | | | -
23 | Rcp (11) | Zo-1 | Zo | Zo+1 | Ω
24 | Rtt (11) (12) | Zo-5 | Zo | Zo+5 | Ω

(1) Max value is based upon conservative signal integrity approach. This value could be extended only if detailed signal integrity analysis of rise time and fall time confirms desired operation.
(2) PCB track length shown as ps is a normalized representation of length. 1 ps can be equated to 5 mils as a simple transformation. This is stripline equivalent length where velocity compensation must be used for all segments routed as microstrip track.
(3) ADDR_CTRL net class relative to its CK net class.
(4) While this length can be increased for convenience, its length should be minimized.
(5) This is the combined length from the processor to the SDRAM. It must be computed for each SDRAM to ensure that the segment matching does not result in accumulated error. For the first SDRAM, it is A1 + A2 + AS, computed for each signal. For the 5th SDRAM, it is A1 + A2 + A3 + A3 + A3 + A3 + AS, computed for each signal.
(6) Count vias individually from processor to each SDRAM.
(7) Via count difference may increase by 1 only if accurate 3-D modeling of the signal flight times – including accurately modeled signal propagation through vias – has been applied to ensure all segment skew maximums are not exceeded.
(8) Center-to-center spacing is allowed to fall to minimum 2w for up to 500 mils of routed length (only near endpoints).
(9) CK spacing set to ensure proper differential impedance.
(10) The user must control the impedance so that inadvertent impedance mismatches are not created. Generally speaking, center-to-center spacing should be either 2w or slightly larger than 2w to achieve a differential impedance equal to twice the single-ended impedance, Zo, on that layer.
(11) Source termination (series resistor at driver) is specifically not allowed.
(12) Termination values should be uniform across the net class.
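The combined-length computation in footnote (5) lends itself to a short script. The sketch below walks the fly-by chain for one address net against CK; all segment delays are hypothetical placeholders for values extracted from a routed board.

```python
# Sketch: computing the combined processor-to-SDRAM delay per footnote (5),
# to check total skew rather than only per-segment skew. Segment delays (ps)
# are illustrative placeholders for values extracted from the layout.

def total_delay(a1_a2, a3_segments, a_s, sdram_index):
    """Delay to SDRAM n (0-based): A1+A2, then n A3 hops, then the AS segment."""
    return a1_a2 + sum(a3_segments[:sdram_index]) + a_s

ck_route = {"a1_a2": 180.0, "a3": [24.8, 25.1, 24.9, 25.0], "as": 10.2}
a0_route = {"a1_a2": 182.1, "a3": [25.3, 25.6, 25.2, 25.5], "as": 11.0}

for n in range(5):  # five SDRAMs on the fly-by chain
    ck = total_delay(ck_route["a1_a2"], ck_route["a3"], ck_route["as"], n)
    a0 = total_delay(a0_route["a1_a2"], a0_route["a3"], a0_route["as"], n)
    print(f"SDRAM {n + 1}: CK-to-A0 total skew = {a0 - ck:+.1f} ps")
```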
2.16 Data Group Routing Specification
Skew within the DQS and DQ/DM net classes directly reduces setup and hold margin for the DQ and DM
nets. Thus, this skew must be controlled. Routed PCB track has a delay proportional to its length. Thus,
the length skew must be managed through matching the lengths of the routed tracks within a defined
group of signals. The only way to practically match lengths on a PCB is to lengthen the shorter traces up
to the length of the longest net in the net class and its associated clock pair, DQSP and DQSN.
2.16.1 DQLM - DQ Longest Manhattan Distance
As with CK and ADDR_CTRL, a reasonable trace route length is within a percentage of its Manhattan
distance. DQLMn is defined as DQ Longest Manhattan distance n, where n is the byte number. For a
32-bit interface, there are four DQLMs, DQLM0-DQLM3. Likewise, for a 16-bit interface, there are two
DQLMs, DQLM0-DQLM1, and when ECC is used, there is an additional one for the ECC data group.
NOTE: It is not required nor recommended to match the lengths across all byte lanes. Length
matching is only required within each byte.
Given the DQS, DQ, and DM pin locations on the processor and the DDR4 memories, the maximum
possible Manhattan distance can be determined given the placement. It is from this distance that an
upper limit on the lengths of the transmission lines for the data bus can be established. Unlike
CACLM, there is no margin added to the DQLMn limits. These limits are simply the sum of the horizontal
and vertical distances for the longest pin-to-pin route for that byte group.
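In practice this means each byte lane is matched only against itself. The sketch below shows the tuning targets for one byte lane, using hypothetical net lengths:

```python
# Sketch: within-byte length matching. Each byte lane is matched only to its
# own DQS pair; byte lanes are never matched to each other. Net lengths (mils)
# are illustrative placeholders.

byte0 = {
    "DDR_DQS0P": 1480, "DDR_DQS0N": 1482,
    "DDR_DQ0": 1475, "DDR_DQ1": 1490, "DDR_DQ2": 1468,
    "DDR_DM0": 1485,
}

target = max(byte0.values())  # lengthen shorter nets up to the longest net
for net, length in sorted(byte0.items()):
    print(f"{net}: add {target - length} mils of trace to match")
```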
2.16.2 Data Group Routing Limits
Table 10 contains the routing specifications for DQS, DQ, and DM routing groups. Each byte lane is
routed and matched independently.
To use length matching (in mils) instead of time delay (in ps), multiply the ps limit by 5. Microstrip
routes propagate faster than stripline routes. A standard practice when using length matching is therefore
to divide each microstrip length by 1.1; this produces a compensated length that normalizes the microstrip
length to the stripline length and aligns with the delay limits provided.
Table 10. Data Group Routing Specifications

Number | Parameter | MIN | MAX | UNIT
DRS31 | BYTE0 length | | 500 | ps (1)
DRS32 | BYTE1 length | | 500 | ps
DRS33 | BYTE2 length | | 500 | ps
DRS34 | BYTE3 length | | 500 | ps
DRS35 | ECC BYTE length | | 500 | ps
DRS36 | DQSn+ to DQSn- skew | | 0.4 | ps
DRS37 | DQSn to DQn skew (2) (3) | | 2 | ps
DRS38 | Vias per trace | | 2 (4) | vias
DRS39 | Via count difference | | 0 (5) | vias
DRS310 | Center-to-center BYTEn to other DDR4 trace spacing (6) | 4w (7) | | -
DRS311 | Center-to-center DQn to other DQn trace spacing (8) | 3w (7) | | -
DRS312 | DQSn center-to-center spacing (9) (10) | | | -
DRS313 | DQSn center-to-center spacing to other net | 4w (7) | | -

(1) PCB track length shown as ps is a normalized representation of length. 1 ps can be equated to 5 mils as a simple transformation. This is stripline equivalent length where velocity compensation must be used for all segments routed as microstrip track.
(2) Length matching is only done within a byte. Length matching across bytes is neither required nor recommended.
(3) Each DQS pair is length matched to its associated byte.
(4) Max value is based upon conservative signal integrity approach. This value could be extended only if detailed signal integrity analysis of rise time and fall time confirms desired operation.
(5) Via count difference may increase by 1 only if accurate 3-D modeling of the signal flight times – including accurately modeled signal propagation through vias – has been applied to ensure DQn skew and DQSn to DQn skew maximums are not exceeded.
(6) Other DDR4 trace spacing means other DDR4 net classes not within the byte.
(7) Center-to-center spacing is allowed to fall to minimum 2w for up to 500 mils of routed length (only near endpoints).
(8) This applies to spacing within the net classes of a byte.
(9) DQS pair spacing is set to ensure proper differential impedance.
(10) The user must control the impedance so that inadvertent impedance mismatches are not created. Generally speaking, center-to-center spacing should be either 2w or slightly larger than 2w to achieve a differential impedance equal to twice the single-ended impedance, Zo, on that layer.
2.17 Bit Swapping
2.17.1 Data Bit Swapping
Data bit swapping is allowed to simplify routing as long as the bits swapped are within the same byte
group. This is only possible when not using CRC. However, the prime bit, the lowest numbered bit in each
byte, must be connected to the corresponding bit on the SDRAM without swapping. That is, bit 0, bit 8, bit
16, and so forth. Also, the DM and DQS bits must not be swapped.
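These rules reduce to a simple check on a proposed swap map. The sketch below is illustrative only; the maps shown are hypothetical examples, not recommended assignments.

```python
# Sketch: checking a proposed DQ swap map against the rules above. The map
# (controller bit -> SDRAM bit) is hypothetical. Rules: swaps stay within a
# byte group, and the prime bit of each byte (0, 8, 16, 24) must not move.

def swap_is_legal(swap_map):
    for src, dst in swap_map.items():
        if src // 8 != dst // 8:         # must stay within the same byte group
            return False
        if src % 8 == 0 and src != dst:  # prime bit must connect 1:1
            return False
    return True

print(swap_is_legal({0: 0, 1: 3, 3: 1, 2: 2}))  # True: swaps within byte 0
print(swap_is_legal({8: 9, 9: 8}))              # False: prime bit 8 moved
```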
2.17.2 Address and Control Bit Swapping
Bit swapping of the address or control bits is not allowed, as this breaks functionality.
3 LPDDR4 Board Design and Layout Guidance
3.1 LPDDR4 Introduction
LPDDR4 is an SDRAM device specification governed by the JEDEC standard JESD209-4, Low Power
Double Data Rate 4 (LPDDR4). This standard strives to reduce power and improve signal integrity by
implementing a lower voltage I/O power rail, employing ODT on the Command/Address bus, and reducing
the overall width of the Command/Address bus, among other features. Unlike other DDR types, LPDDR4
has been organized into 2 × 16-bit channels. The following sections detail the routing specification and
layout guidelines for an LPDDR4 interface.
3.2 LPDDR4 Device Implementations Supported
There are different LPDDR4 SDRAM combinations supported by the DDR subsystem. Table 11 lists the
supported LPDDR4 device combinations.
Table 11. Supported LPDDR4 SDRAM Combinations

LPDDR4 SDRAM Count | Channels | Die | Ranks | LPDDR4 Channel Width | DDRSS Data Width | DDRSS ECC Width
1 (1) | 1 | 1 | 1 | 16 bits | 16 bits | 0 bits
1 (2) | 2 | 1 | 1 | 16 bits | 32 bits | 0 bits
1 (3) | 2 | 1 | 1 | 16 bits | 16 bits | 0 bits
1 (4) | 2 | 1 | 1 | 16 bits | 16 bits | 6 bits
1 (2) | 2 | 2 | 1 | 16 bits | 32 bits | 0 bits
1 (3) | 2 | 2 | 1 | 16 bits | 16 bits | 0 bits
1 (4) | 2 | 2 | 1 | 16 bits | 16 bits | 6 bits

(1) See 16-Bit, Single-Rank / Channel LPDDR4 Implementation (No ECC).
(2) See 32-Bit, Single-Rank LPDDR4 Implementation (No ECC).
(3) See 16-Bit, Single-Rank LPDDR4 Implementation (No ECC).
(4) See 16-Bit, Single-Rank LPDDR4 Implementation (With ECC).

3.3 LPDDR4 Interface Schematics
The LPDDR4 interface schematics vary, depending upon the width of the DDR subsystem bus
implemented (16 or 32 bits), and whether ECC is implemented. General connectivity is straightforward and
consistent between the implementations. Figure 13 illustrates a 32-bit, single-rank LPDDR4
implementation. Alternatively, a 16-bit interface may be achieved by only using the lower two byte lanes of
the DDR subsystem and channel A of the LPDDR4 interface, as illustrated in Figure 14. If ECC is
required, the data bus width is limited to 16 bits, and byte lane 3 is used for ECC. In this configuration,
byte lane 2 must still be connected to the LPDDR4 memory for training purposes.
Figure 15 illustrates a 16-bit, single-rank LPDDR4 implementation using ECC.
NOTE: The EMIF DDR ECC pins (DDR_ECC_DQSP, DDR_ECC_DQSN, DDR_ECC_DM, and
DDR_ECC_D[6:0]) are never used in any LPDDR4 implementation.
NOTE: Though LPDDR4 SDRAMs pin out 2 separate channels, independent channel use is not
supported by this processor. The ADDR_CTRL signals are replicated for load sharing
purposes and must be connected as shown.
When not using all or part of a DDR interface, the proper method of handling the unused pins is to tie off
the unused DDR_DQSxP pins to ground through a 1k-Ω resistor and to tie off the unused DDR_DQSxN
pins to the VDDS_DDR supply, also referred to as the I/O supply VDDQ, through a 1k-Ω resistor. This
must be done for each byte not used. Although these signals have internal pullups and pulldowns,
external pullups and pulldowns provide additional protection against external electrical noise causing
activity on the signals.
SPRACI2A – October 2018 – Revised March 2019
Submit Documentation Feedback
AM65x/DRA80xM DDR Board Design and Layout Guidelines
Copyright © 2018–2019, Texas Instruments Incorporated
23
LPDDR4 Board Design and Layout Guidance
www.ti.com
[Schematic: processor byte lanes 0 and 1 (DDR_DQ[15:0], DDR_DM0/1, DDR_DQS0/1) connect to LPDDR4 channel A and byte lanes 2 and 3 (DDR_DQ[31:16], DDR_DM2/3, DDR_DQS2/3) to channel B; DDR_AC[5:0]/[24]/[26] and DDR_CK0 drive the channel A CA/CS/CKE/CK inputs, and DDR_AC[11:6]/[12]/[13] and DDR_CK1 drive channel B; the EMIF ECC pins and DDR_AC[29:27], DDR_AC[25], and DDR_AC[23:14] are not connected, with the unused ECC DQS pins tied off through 1-kΩ resistors; ODT_CA_A and ODT_CA_B are pulled to VDD2 (1); DDR_RESETn drives RESET_n; DDR_VTP connects through a 240-Ω, 1%, 20-mW resistor.]
(1) ODT_CA_A and ODT_CA_B are recommended to be pulled to VDD2. However, designers should connect them based on requirements documented in the LPDDR4 datasheet.
(2) Unused data and address / control signals should be connected based on requirements documented in the LPDDR4 datasheet.
Figure 13. 32-Bit, Single-Rank LPDDR4 Implementation (No ECC)
[Schematic: processor byte lanes 0 and 1 connect to channel A of the LPDDR4 device; byte lanes 2 and 3 are unused, with each unused DQSP pin tied to ground and each unused DQSN pin tied to VDDQ through 1-kΩ resistors; the controller's channel B address/control and clock pins are not connected; ODT_CA_A and ODT_CA_B are pulled to VDD2 (1); DDR_VTP connects through a 240-Ω, 1%, 20-mW resistor.]
(1) ODT_CA_A and ODT_CA_B are recommended to be pulled to VDD2. However, designers should connect them based on requirements documented in the LPDDR4 datasheet.
(2) Unused data and address / control signals should be connected based on requirements documented in the LPDDR4 datasheet.
Figure 14. 16-Bit, Single-Rank LPDDR4 Implementation (No ECC)
[Schematic: processor byte lanes 0 and 1 connect to LPDDR4 channel A; byte lane 3 (used for ECC) and byte lane 2 (connected for training only) connect to channel B; the EMIF DDR_ECC pins are not connected, with the ECC DQS pair tied off through 1-kΩ resistors; ODT_CA_A and ODT_CA_B are pulled to VDD2 (1); DDR_VTP connects through a 240-Ω, 1%, 20-mW resistor.]
(1) ODT_CA_A and ODT_CA_B are recommended to be pulled to VDD2. However, designers should connect them based on requirements documented in the LPDDR4 datasheet.
(2) Unused data and address / control signals should be connected based on requirements documented in the LPDDR4 datasheet.
Figure 15. 16-Bit, Single Rank LPDDR4 Implementation (With ECC)
[Schematic: processor byte lanes 0 and 1 connect to channel A of the LPDDR4 device; byte lanes 2 and 3 and the EMIF ECC pins are unused, with each unused DQSP pin tied to ground and each unused DQSN pin tied to VDDQ through 1-kΩ resistors; the controller's channel B address/control and clock pins and the SDRAM's channel B are not connected; ODT_CA_A is pulled to VDD2 (1); DDR_VTP connects through a 240-Ω, 1%, 20-mW resistor.]
(1) ODT_CA_A is recommended to be pulled to VDD2. However, designers should connect ODT_CA_A based on requirements documented in the LPDDR4 datasheet.
(2) Unused data and address / control signals should be connected based on requirements documented in the LPDDR4 datasheet.
Figure 16. 16-Bit, Single-Rank / Channel LPDDR4 Implementation (No ECC)
3.4 Compatible JEDEC LPDDR4 Devices
Table 12 shows the parameters of the JEDEC LPDDR4 devices compatible with this interface.
Table 12. Compatible JEDEC LPDDR4 Devices

Number | Parameter | MIN | MAX | UNIT
1 | Data Rate (1) (2) | | | -
2 | Channel Bit Width | x16 | x16 | Bits
3 | Channels | 1 | 2 | -
4 | Ranks | 1 | 1 | -
5 | Die | 1 | 2 | -
6 | Device Count | 1 | 1 | -

(1) Refer to the device data manual for supported data rates.
(2) SDRAMs in faster speed grades can be used, provided they are properly configured to operate at the supported data rates. Faster speed grade SDRAMs may have faster edge rates, which may affect signal integrity. SDRAMs with faster speed grades must be validated on the target board design.

3.5 Placement
Figure 17 shows the required placement for the processor and the LPDDR4 device. The dimensions for
this figure are defined in Table 13. The placement does not restrict the side of the PCB on which the
devices are mounted. The ultimate purpose of the placement is to limit the maximum trace lengths and
allow for proper routing space.
[Placement diagram: the LPDDR4 device must be placed within a horizontal distance x1 and a vertical distance y1 of the processor ball field.]
Figure 17. LPDDR4 Placement Specification
Table 13. LPDDR4 Placement Parameters

Number | Parameter | MIN | MAX | UNIT
1 | x1 | | 2000 | Mils
2 | y1 | | 1000 | Mils
3.6 LPDDR4 Keepout Region
The region of the PCB used for LPDDR4 circuitry must be isolated from other signals. The LPDDR4
keepout region is defined for this purpose and is shown in Figure 18. The size of this region varies with
the placement and DDR routing. Non-LPDDR4 signals should not be routed on the DDR signal layers
within the LPDDR4 keepout region. Non-LPDDR4 signals may be routed in this region only if they are
routed on other layers separated from the DDR signal layers by a ground layer. No breaks are allowed in
the reference ground layers in this region. In addition, a solid VDDS_DDR power plane should exist
across the entire keepout region.
[Diagram: the DDR keepout region spans the DDR controller ball field on the processor and the area containing the DDR routing and the LPDDR4 device.]
Figure 18. LPDDR4 Keepout Region
3.7 Net Classes
Routing rules are applied to signals in groups called net classes. Each net class contains signals with the
same routing requirements. This simplifies the implementation and compliance of these routes. Table 14
lists the clock net classes for the LPDDR4 interface. Table 15 lists the signal net classes, and associated
clock net classes, for signals in the LPDDR4 interface. These net classes are then linked to the
termination and routing rules that follow.
Table 14. Clock Net Class Definitions

Clock Net Class | Processor Pin Names
CK0 | DDR_CK0P / DDR_CK0N
CK1 | DDR_CK1P / DDR_CK1N
DQS0 | DDR_DQS0P / DDR_DQS0N
DQS1 | DDR_DQS1P / DDR_DQS1N
DQS2 | DDR_DQS2P / DDR_DQS2N
DQS3 | DDR_DQS3P / DDR_DQS3N
Table 15. Signal Net Class Definitions

Signal Net Class | Associated Clock Net Class | Processor Pin Names
ADDR_CTRL_A | CK0 | DDR_AC[5:0] / CA[5:0]_A, DDR_AC[24] / CS[0]_A, DDR_AC[26] / CKE[0]_A, DDR_AC[27] / CS[1]_A, DDR_AC[29] / CKE[1]_A
ADDR_CTRL_B | CK1 | DDR_AC[11:6] / CA[5:0]_B, DDR_AC[12] / CS[0]_B, DDR_AC[13] / CKE[0]_B, DDR_AC[14] / CS[1]_B, DDR_AC[15] / CKE[1]_B
Byte 0 | DQS0 | DDR_DQ[7:0], DDR_DM0
Byte 1 | DQS1 | DDR_DQ[15:8], DDR_DM1
Byte 2 | DQS2 | DDR_DQ[23:16], DDR_DM2
Byte 3 | DQS3 | DDR_DQ[31:24], DDR_DM3
3.8 LPDDR4 Signal Termination
LPDDR4 memories have software configurable on-die termination for both the data group nets and the
address / control signals. The DDR subsystem also contains software configurable on-die termination for
the data group nets. Thus, termination is not required on any DDR signals for an LPDDR4 configuration.
3.9 LPDDR4 VREF Routing
LPDDR4 memories generate their own VREFCA and VREFDQ internally for the address / command bus
and data bus, respectively. Similarly, the DDR PHY also provides its own reference voltage for the data
group nets during reads. Thus, unlike DDR3 and DDR4, VREF does not need to be generated on the
board, and there is no required VREF routing for an LPDDR4 configuration.
DDR_VREF0 and DDR_VREF_ZQ of the SOC should be left unconnected. They can be used for
observation only.
3.10 LPDDR4 VTT
Unlike DDR3 and DDR4, no PCB termination is required for the address/control bus of an
LPDDR4 configuration. Thus, VTT does not apply for LPDDR4.
3.11 CK and ADDR_CTRL Topologies
The CK and ADDR_CTRL net classes are routed similarly, and are length matched from the DDR
controller in the processor to the LPDDR4 SDRAM to minimize skew between the signals and ensure that
the ADDR_CTRL signals are properly sampled at the SDRAM. The CK0 and CK1 net classes require
more care because they run at a higher transition rate and are differential. The CK and ADDR_CTRL
topology is point-to-point.
Figure 19 shows the topology of the CK0 and CK1 net classes, and Figure 20 shows the topology for the
corresponding ADDR_CTRL_A and ADDR_CTRL_B net classes. Length matching requirements for the
routing segments are detailed in Table 16.
[Topology: the processor differential clock output buffer connects to the LPDDR4 differential clock input buffer through segment RSAC1 on each leg, routed as a differential pair.]
Figure 19. LPDDR4 CK Topology
[Topology: the processor address and control output buffer connects to the LPDDR4 address and control input buffer through segment RSAC2.]
Figure 20. LPDDR4 ADDR_CTRL Topology
Minimize layer transitions during routing. If a layer transition is necessary, it is preferable to transition to a
layer using the same reference plane. If this cannot be accommodated, ensure there are nearby stitching
vias to allow the return currents to transition between reference planes when both reference planes are
ground or VDDS_DDR. Alternately, ensure there are nearby bypass capacitors to allow the return currents
to transition between reference planes when one of the reference planes is ground and the other is
VDDS_DDR. This must occur at every reference plane transition. The goal is to minimize the size of the
return current path thus minimizing the inductance in this path. Lack of these stitching vias or capacitors
results in impedance discontinuities in the signal path that increase crosstalk and signal distortion.
3.12 Data Group Topologies
The data line topology is always point-to-point for LPDDR4 implementations. Minimize layer transitions
during routing. If a layer transition is necessary, it is better to transition to a layer using the same reference
plane. If this cannot be accommodated, ensure there are nearby ground vias to allow the return currents
to transition between reference planes. The goal is to provide a low inductance path for the return current.
To optimize the length matching, TI recommends routing all nets within a single data routing group on one
layer where all nets have the exact same number of vias and the same via barrel length.
DQSP and DQSN lines are point-to-point signals routed as a differential pair. Figure 21 illustrates the
DQSP/N connection topology.
[Topology: the processor DQS I/O buffer connects to the LPDDR4 DQS I/O buffer through segment RSD1 on each leg, routed as a differential pair.]
Figure 21. LPDDR4 DQS Topology
DQ and DM lines are point-to-point signals routed as single-ended. Figure 22 illustrates the DQ and DM
connection topology.
[Topology: the processor DQ and DM I/O buffers connect to the LPDDR4 DQ and DM I/O buffers through segment RSD2, routed single-ended.]
Figure 22. LPDDR4 DQ/DM Topology
No stubs or termination are allowed on the nets of the data group topologies. All test and probe
access points must be in line without any branches or stubs.
3.13 CK and ADDR_CTRL Routing Specification
Skew within the CK and ADDR_CTRL net classes directly reduces setup and hold margin for the
ADDR_CTRL nets. Thus, this skew must be controlled. The routed PCB track has a delay proportional to
its length. Thus, the delay skew must be managed through matching the lengths of the routed tracks
within a defined group of signals. The only way to practically match lengths on a PCB is to lengthen the
shorter traces up to the length of the longest net in the net class and its associated clock.
Table 16 lists the limits for the individual segments that comprise the routing from the processor to the
SDRAM. These segment lengths coincide with the CK and ADDR_CTRL topology diagram shown
previously in Figure 19 and Figure 20. By matching the length for the same segments of all signals in a
routing group, the signal delay skews are controlled. Most PCB layout tools can be configured to generate
reports to assist with this validation. If such a report cannot be generated automatically, the lengths must
be extracted and verified manually.
Table 16. CK and ADDR_CTRL Routing Specifications

Number | Parameter | MIN | MAX | UNIT
LP4_ACRS1 | RSAC1 (CKn) Length | | 500 (1) | ps
LP4_ACRS2 | RSAC2 (ADDR_CTRL_n) Length | | 500 (1) | ps
LP4_ACRS3 | RSAC1 Skew (Within CK0 / CK1 nets) | | 0.4 | ps
LP4_ACRS4 | RSAC1 Skew (Between CK0 and CK1 nets) | | 3 | ps
LP4_ACRS5 | RSAC2 Skew (Within ADDR_CTRL_A / ADDR_CTRL_B nets) | | 3 | ps
LP4_ACRS6 | RSAC1 to RSAC2 Skew (Between ADDR_CTRL_n signal net class and associated CKn clock net class) | | 3 | ps
LP4_ACRS7 | Vias per trace | | 3 (1) | vias
LP4_ACRS8 | Via count difference | | 1 (2) | vias
LP4_ACRS9 | Center-to-center CK to other LPDDR4 trace spacing (3) | 4w | | -
LP4_ACRS10 | Center-to-center ADDR_CTRL to other LPDDR4 trace spacing (3) | 4w | | -
LP4_ACRS11 | Center-to-center ADDR_CTRL to other ADDR_CTRL trace spacing (3) | 3w | | -
LP4_ACRS12 | CK center-to-center spacing (4) (5) | | | -
LP4_ACRS13 | CK spacing to other net (3) | 4w | | -

(1) Max value is based upon conservative signal integrity approach. This value could be extended only if detailed signal integrity analysis of rise time and fall time confirms desired operation.
(2) Via count difference may increase by 1 only if accurate 3-D modeling of the signal flight times – including accurately modeled signal propagation through vias – has been applied to ensure all segment skew maximums are not exceeded.
(3) Center-to-center spacing is allowed to fall to minimum 2w for up to 500 mils of routed length (only near endpoints).
(4) CK spacing set to ensure proper differential impedance.
(5) The user must control the impedance so that inadvertent impedance mismatches are not created. Generally speaking, center-to-center spacing should be either 2w or slightly larger than 2w to achieve a differential impedance equal to twice the single-ended impedance, Zo, on that layer.
3.14 Data Group Routing Specification
Skew within the Byte signal net class directly reduces the setup and hold margin for the DQ and DM nets.
Thus, as with the ADDR_CTRL signal net class and associated CK clock net class, this skew must be
controlled. The routed PCB track has a delay proportional to its length. Thus, the length skew must be
managed through matching the lengths of the routed tracks within a defined group of signals. The only
way to practically match lengths on a PCB is to lengthen the shorter traces up to the length of the longest
net in the net class and its associated clock.
NOTE: It is not required nor recommended to match the lengths across all byte lanes. Length
matching is only required within each byte.
Table 17 contains the routing specifications for the Byte0, Byte1, Byte2, and Byte3 routing groups. Each
signal net class and its associated clock net class is routed and matched independently.
Table 17. Data Group Routing Specifications

Number | Parameter | MIN | MAX | UNIT
LP4_DRS1 | RSD1 (DQSn) Length | | 500 | ps
LP4_DRS2 | RSD2 (DMn / DQn) Length | | 500 | ps
LP4_DRS3 | RSD1 Skew (DQS+ to DQS-; Within a clock net class) | | 0.4 | ps
LP4_DRS4 | RSD1 to RSD2 Skew (DQS to DQ; Within a signal net class) (1) (2) | | 2 | ps
LP4_DRS5 | RSD2 Skew (DQ/DM to DQ/DM; Within a signal net class) (1) | | 2 | ps
LP4_DRS6 | Vias Per Trace | | 2 (3) | vias
LP4_DRS7 | Via Count Difference | | 0 (4) | vias
LP4_DRS8 | RSD1 center-to-center spacing (between clock net class) (5) | 4w | | -
LP4_DRS9 | RSD1 center-to-center spacing (within clock net class) (6) (7) | | | -
LP4_DRS10 | RSD2 center-to-center spacing (between signal net class) (5) | 4w | | -
LP4_DRS11 | RSD2 center-to-center spacing (within signal net class) (5) | 3w | | -

(1) Length matching is only done within a byte. Length matching across bytes is neither required nor recommended.
(2) Each DQS pair is length matched to its associated byte.
(3) Max value is based upon conservative signal integrity approach. This value could be extended only if detailed signal integrity analysis of rise time and fall time confirms desired operation.
(4) Via count difference may increase by 1 only if accurate 3-D modeling of the signal flight times – including accurately modeled signal propagation through vias – has been applied to ensure DQn skew and DQSn to DQn skew maximums are not exceeded.
(5) Center-to-center spacing is allowed to fall to minimum 2w for up to 500 mils of routed length (only near endpoints).
(6) DQS pair spacing is set to ensure proper differential impedance.
(7) The user must control the impedance so that inadvertent impedance mismatches are not created. Generally speaking, center-to-center spacing should be either 2w or slightly larger than 2w to achieve a differential impedance equal to twice the single-ended impedance, Zo, on that layer.
3.15 Channel, Byte, and Bit Swapping
All signals, including data and address / control, must be routed 1 to 1 from the DDR controller to the
LPDDR4 memory. Byte swapping across channels or within a channel is not allowed. Similarly, data bit
swapping across byte lanes or within a byte is also not allowed. In addition, byte lanes 0 and 1 of the DDR
controller must be routed to channel A of the LPDDR4 memory, and byte lanes 2 and 3 of the DDR
controller must be routed to channel B of the LPDDR4 memory.
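This fixed mapping can be expressed as a check against a netlist extraction. The sketch below is illustrative; the routed-mapping format is a hypothetical convention, not the output of any TI tool.

```python
# Sketch: verifying the fixed LPDDR4 lane-to-channel assignment. The mapping
# below is a hypothetical netlist extraction. Rules: no swapping anywhere;
# lanes 0/1 must land on channel A and lanes 2/3 on channel B, in order.

LANE_TO_CHANNEL = {0: "A", 1: "A", 2: "B", 3: "B"}

def lane_routing_is_legal(routed):
    """routed: controller byte lane -> (LPDDR4 channel, byte within channel)."""
    for lane, (channel, ch_byte) in routed.items():
        if channel != LANE_TO_CHANNEL[lane]:
            return False
        if ch_byte != lane % 2:  # byte order within the channel is also fixed
            return False
    return True

print(lane_routing_is_legal({0: ("A", 0), 1: ("A", 1),
                             2: ("B", 0), 3: ("B", 1)}))  # True
print(lane_routing_is_legal({0: ("B", 0), 1: ("B", 1),
                             2: ("A", 0), 3: ("A", 1)}))  # False: channels swapped
```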
4 DDR3L Board Design and Layout Guidance

4.1 DDR3L Introduction
The DDR EMIF on the AM65x/DRA80xM can be configured to support DDR3L SDRAMs. Features and
implementation techniques for DDR3L SDRAMs are well known at this time. Additionally, these SDRAMs
are fully mature and their cost is low.
4.2 DDR3L Device Implementations Supported
There are several possible combinations of SDRAM devices supported by the DDR3L EMIF. These device
combinations match those previously stated for DDR4. Thus, the contents of Table 4 are equally
applicable to DDR3L implementations. The SDRAMs used in each combination must be identical: that is,
they must have the same part number.
4.3 DDR3L Interface Schematics
This section discusses implementations using single-rank x16 and x8 SDRAM devices. This section does
not discuss recommendations for implementations that support low-power operation where the SDRAM is
held in self-refresh and the processor is powered off. It also does not discuss implementations using
dual-rank SDRAMs. This information will be added in the next version of this document.
4.3.1 DDR3L Implementation Using 16-Bit SDRAM Devices
The DDR3L interface schematics vary, depending upon the width of the DDR3L SDRAM devices used,
the width of the EMIF bus implemented (16 or 32 bits), and whether ECC is implemented. General
connectivity is straightforward and consistent between the implementations. 16-bit SDRAM devices look
like two 8-bit devices. Figure 23 shows the schematic connections for a 32-bit interface with ECC using
x16 devices. If ECC is not desired, the third SDRAM is not implemented. Similarly, a 16-bit interface only
uses a single x16 SDRAM.
When not using one or more of the byte lanes on the processor, the proper method of handling the
unused pins is to tie off the unused DDR_DQSxP pins to ground through a 1k-Ω resistor and to tie off the
unused DDR_DQSxN pins to the VDDS_DDR supply, also referred to as the I/O supply VDDQ, through a
1k-Ω resistor. This must be done for each byte not used. Although these signals have internal pullups and
pulldowns, external pullups and pulldowns provide additional protection against external electrical noise
causing activity on the signals.
[Schematic: processor byte lanes 0/1 and 2/3 connect to two x16 DDR3L SDRAMs, and the ECC byte (DDR_ECC_D[6:0], DDR_ECC_DM, DDR_ECC_DQS) connects to the lower byte of a third x16 SDRAM whose upper byte is left unconnected; DDR_CK0 and DDR_CK1 drive the SDRAM clock inputs; the address, command, control, and RESET# nets fan out to all three SDRAMs, with the address/control nets terminating through Zo resistors to VTT; VREFCA and VREFDQ are supplied from the DDR VREF rail; each SDRAM ZQ pin has a 240-Ω resistor; DDR_VTP connects through a 240-Ω, 1%, 20-mW resistor; CS1, ODT1, and CKE1 are not connected in this single-rank implementation.]
Figure 23. 32-Bit, Single-Rank DDR3L Implementation With ECC Using x16 SDRAMs
4.3.2 DDR3L Implementation Using 8-Bit SDRAM Devices
Figure 24 shows the schematic connections for a 32-bit interface with ECC using x8 devices. If ECC is not
desired, the fifth SDRAM is not implemented. Similarly, a 16-bit interface only uses a pair of x8 SDRAMs
and a 16-bit interface with ECC uses three x8 SDRAMs.
[Schematic: each of the four processor data byte lanes connects to one x8 DDR3L SDRAM, and the ECC byte (DDR_ECC_D[6:0], DDR_ECC_DM, DDR_ECC_DQS) connects to a fifth x8 SDRAM; DDR_CK0 fans out to all five SDRAM clock inputs (DDR_CK1 is not connected); the address, command, control, and RESET# nets fan out to all five SDRAMs, with the address/control nets terminating through Zo resistors to VTT; VREFCA and VREFDQ are supplied from the DDR VREF rail; each SDRAM ZQ pin has a 240-Ω resistor; DDR_VTP connects through a 240-Ω, 1%, 20-mW resistor; CS1, ODT1, and CKE1 are not connected in this single-rank implementation.]
Figure 24. 32-Bit, Single-Rank DDR3L Implementation With ECC Using x8 SDRAMs
4.4 Compatible JEDEC DDR3L Devices
Table 18 shows the parameters of the JEDEC DDR3L devices compatible with the DDR3L Controller and
PHY interface. Generally, the DDR3L interface is compatible with all JEDEC-compliant DDR3L SDRAM
devices in x8 or x16 widths.
Table 18. Compatible JEDEC DDR3L Devices

Number | Parameter | MIN | MAX | UNIT
1 | JEDEC DDR3L data rate (1) (2) | | | MT/s
2 | JEDEC DDR3L device bit width | x8 | x16 | Bits
3 | JEDEC DDR3L device count (3) | 1 | 5 | Devices

(1) Refer to the device data manual for supported data rates.
(2) SDRAMs in faster speed grades can be used provided they are properly configured to operate at the supported data rates. Faster speed grade SDRAMs may have faster edge rates, which may affect signal integrity. SDRAMs with faster speed grades must be validated on the target board design.
(3) For valid DDR3L device configurations and device counts, see Figure 23 and Figure 24.

4.5 Placement
Processor and DDR3L SDRAM placement options are identical for DDR3L implementations to those
previously stated for DDR4 in Section 2.5. Therefore, Figure 3 shows the required placement for the
processor and the DDR3L devices, and the relevant dimensions are shown in Table 6.
4.6 DDR3L Keepout Region
Similar to the placement guidance provided above, the keepout region around DDR3L topologies is the
same as for DDR4 topologies. Therefore, refer to Section 2.6 and Figure 4 for this guidance.
4.7 Net Classes
Routing rules are applied to signals in groups called net classes. Each net class contains signals with the
same routing requirements. This simplifies the implementation and compliance of these routes. Table 19
lists the clock net classes for the DDR3L interface. Table 20 lists the signal net classes, and associated
clock net classes, for signals in the DDR3L interface. These net classes are then linked to the termination
and routing rules that follow.
Table 19. Clock Net Class Definitions

Clock Net Class | Processor Pin Names
CK0 | DDR_CK0P/CK0, DDR_CK0N/CK0_n
CK1 | DDR_CK1P/CK1, DDR_CK1N/CK1_n
DQS0 | DDR_DQS0P/DQS0, DDR_DQS0N/DQS0_n
DQS1 | DDR_DQS1P/DQS1, DDR_DQS1N/DQS1_n
DQS2 (1) | DDR_DQS2P/DQS2, DDR_DQS2N/DQS2_n
DQS3 (1) | DDR_DQS3P/DQS3, DDR_DQS3N/DQS3_n
ECC DQS (2) | DDR_ECC_DQSP/DQS4, DDR_ECC_DQSN/DQS4_n

(1) Only used on 32-bit wide DDR3L memory systems.
(2) Only used on DDR3L memory systems with ECC.
Table 20. Signal Net Class Definitions

Signal Net Class | Associated Clock Net Class | Processor Pin Names
ADDR_CTRL | CK0 / CK1 (1) | DDR_AC[15:0]/A[15:0], DDR_AC16/WE_n, DDR_AC17/CAS_n, DDR_AC18/RAS_n, DDR_AC[21:19]/BA[2:0], DDR_AC24/CS0_n, DDR_AC25/ODT0, DDR_AC26/CKE0, DDR_AC27/CS1_n, DDR_AC28/ODT1, DDR_AC29/CKE1
BYTE0 | DQS0 | DDR_DQ[7:0], DDR_DM0
BYTE1 | DQS1 | DDR_DQ[15:8], DDR_DM1
BYTE2 (2) | DQS2 | DDR_DQ[23:16], DDR_DM2
BYTE3 (2) | DQS3 | DDR_DQ[31:24], DDR_DM3
ECC BYTE (3) | ECC DQS | DDR_ECC_D[6:0]/DQ[38:32], DDR_ECC_DM/DM4

(1) Although the CK1 clock group is independent from the CK0 clock group, all signals in this routing group will be associated to CK0 for topologies supported. The CK1 clock group will be length matched with the CK0 clock group.
(2) Only used on 32-bit wide DDR3L memory systems.
(3) Only used on DDR3L memory systems with ECC.

4.8 DDR3L Signal Termination
Signal terminators are required for the CK0/1 and ADDR_CTRL net classes. This is shown in Figure 23
and Figure 24. The data group nets are terminated by ODT in the processor and SDRAM memories, and
thus the data group PCB traces must be unterminated. Detailed termination specifications are covered in
the routing rules in the following sections.
4.9 VREF Routing
The SSTL reference voltage, VREF, for DDR3L is needed for correct operation of all address, command,
control, and data signals. However, unlike previous processors supporting DDR3L, the processor’s PHY
generates the VREF internally for data sampling during read cycles. VREF must be generated and
supplied to the SDRAM devices for address, command, and control at the VREFCA input, and supplied to
the SDRAM devices for data sampling during write cycles on the VREFDQ input. VREF must be 50% of
the DDR power supply voltage (VDDS_DDR) and is typically generated with the DDR3L VTT power
supply. It should be routed as a nominal 20-mil wide trace with 0.1-μF bypass capacitors near each device
connection. Narrowing the VREF trace is allowed to accommodate routing congestion for short lengths
near endpoints.
4.10 VTT
As with VREF, the nominal value of the VTT supply is 50% of the DDR supply voltage. Unlike VREF, the
VTT supply is expected to source and sink current; specifically the termination current for the address,
command, and control terminators. VTT termination is needed at the end of the fly-by nets, and the VTT
supply should be routed as a power sub-plane. VTT must be bypassed near the terminator resistors.
4.11 CK and ADDR_CTRL Topologies and Routing Guidance
CK and ADDR_CTRL topologies and routing guidance for DDR3L are identical to that previously stated for
DDR4 in Section 2.13. Refer to the text and figures in that section for guidance when using the processor
with DDR3L memory.
4.12 Data Group Topologies and Routing Guidance
Data group topologies and routing guidance for DDR3L are identical to that previously stated for DDR4 in
Section 2.14. Refer to the text and figures in that section for guidance when using the processor with
DDR3L memory.
4.13 CK and ADDR_CTRL Routing Specification
CK and ADDR_CTRL routing requirements for DDR3L are identical to those previously stated for DDR4 in
Section 2.15. Refer to the text and tables in that section for guidance when using the processor with
DDR3L memory.
4.14 Data Group Routing Specification
Data group routing requirements for DDR3L are identical to those previously stated for DDR4 in
Section 2.16. Refer to the text and tables in that section for guidance when using the processor with
DDR3L memory.
4.15 Bit Swapping
4.15.1 Data Bit Swapping
Data bit swapping is allowed to simplify routing, provided the bits swapped are within the same byte
group. However, the prime bit, the lowest numbered bit in each byte, must be connected to the
corresponding bit on the SDRAM without swapping. That is, bit 0, bit 8, bit 16, and so forth. Also, the DM
and DQS bits must not be swapped.
4.15.2 Address and Control Bit Swapping
Bit swapping of the address or control bits is not allowed, as this breaks functionality.
Revision History
NOTE: Page numbers for previous revisions may differ from page numbers in the current version.
Changes from Original (October 2018) to A Revision

• Added DDR RESET section. (Page 7)
• Added Velocity Compensation section. (Page 7)
• Updated DDR4 Interface Schematics section. (Page 9)
• Updated 32-Bit, Single-Rank DDR4 Implementation With ECC Using x16 SDRAMs image. (Page 10)
• Updated 32-Bit, Single-Rank DDR4 Implementation With ECC Using x8 SDRAMs image. (Page 11)
• Updated VREF Routing section. (Page 15)
• Updated Data Group Topologies and Routing Guidance section. (Page 18)
• Updated CK and ADDR_CTRL Routing Limits section. (Page 20)
• Updated CK and ADDR_CTRL Routing Specifications table. (Page 20)
• Updated Data Group Routing Limits section. (Page 22)
• Updated Data Group Routing Specifications table. (Page 22)
• Updated 32-Bit, Single-Rank DDR3L Implementation With ECC Using x16 SDRAMs image. (Page 35)
• Updated 32-Bit, Single-Rank DDR3L Implementation With ECC Using x8 SDRAMs image. (Page 36)
IMPORTANT NOTICE AND DISCLAIMER
TI PROVIDES TECHNICAL AND RELIABILITY DATA (INCLUDING DATASHEETS), DESIGN RESOURCES (INCLUDING REFERENCE
DESIGNS), APPLICATION OR OTHER DESIGN ADVICE, WEB TOOLS, SAFETY INFORMATION, AND OTHER RESOURCES “AS IS”
AND WITH ALL FAULTS, AND DISCLAIMS ALL WARRANTIES, EXPRESS AND IMPLIED, INCLUDING WITHOUT LIMITATION ANY
IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT OF THIRD
PARTY INTELLECTUAL PROPERTY RIGHTS.
These resources are intended for skilled developers designing with TI products. You are solely responsible for (1) selecting the appropriate
TI products for your application, (2) designing, validating and testing your application, and (3) ensuring your application meets applicable
standards, and any other safety, security, or other requirements. These resources are subject to change without notice. TI grants you
permission to use these resources only for development of an application that uses the TI products described in the resource. Other
reproduction and display of these resources is prohibited. No license is granted to any other TI intellectual property right or to any third
party intellectual property right. TI disclaims responsibility for, and you will fully indemnify TI and its representatives against, any claims,
damages, costs, losses, and liabilities arising out of your use of these resources.
TI’s products are provided subject to TI’s Terms of Sale (www.ti.com/legal/termsofsale.html) or other applicable terms available either on
ti.com or provided in conjunction with such TI products. TI’s provision of these resources does not expand or otherwise alter TI’s applicable
warranties or warranty disclaimers for TI products.
Mailing Address: Texas Instruments, Post Office Box 655303, Dallas, Texas 75265
Copyright © 2019, Texas Instruments Incorporated