External Memory Interface Handbook Volume 2: Design Guidelines
EMI_DG
2017.05.08
Last updated for Intel® Quartus® Prime Design Suite: 17.0
Contents
1 Planning Pin and FPGA Resources.................................................................................... 9
1.1 Interface Pins.........................................................................................................9
1.1.1 Estimating Pin Requirements...................................................................... 12
1.1.2 DDR, DDR2, DDR3, and DDR4 SDRAM Clock Signals..................................... 13
1.1.3 DDR, DDR2, DDR3, and DDR4 SDRAM Command and Address Signals............ 13
1.1.4 DDR, DDR2, DDR3, and DDR4 SDRAM Data, Data Strobes, DM/DBI, and Optional ECC Signals............................................................... 14
1.1.5 DDR, DDR2, DDR3, and DDR4 SDRAM DIMM Options....................................15
1.1.6 QDR II, QDR II+, and QDR II+ Xtreme SRAM Clock Signals........................... 18
1.1.7 QDR II, QDR II+ and QDR II+ Xtreme SRAM Command Signals..................... 19
1.1.8 QDR II, QDR II+ and QDR II+ Xtreme SRAM Address Signals........................ 19
1.1.9 QDR II, QDR II+ and QDR II+ Xtreme SRAM Data, BWS, and QVLD Signals.....20
1.1.10 QDR IV SRAM Clock Signals...................................................................... 20
1.1.11 QDR IV SRAM Commands and Addresses, AP, and AINV Signals.....................21
1.1.12 QDR IV SRAM Data, DINV, and QVLD Signals.............................................. 22
1.1.13 RLDRAM II and RLDRAM 3 Clock Signals....................................................23
1.1.14 RLDRAM II and RLDRAM 3 Commands and Addresses................................. 24
1.1.15 RLDRAM II and RLDRAM 3 Data, DM and QVLD Signals............................... 24
1.1.16 LPDDR2 and LPDDR3 Clock Signal............................................................ 25
1.1.17 LPDDR2 and LPDDR3 Command and Address Signal....................................26
1.1.18 LPDDR2 and LPDDR3 Data, Data Strobe, and DM Signals.............................26
1.1.19 Maximum Number of Interfaces................................................................ 26
1.1.20 OCT Support ..........................................................................................37
1.2 Guidelines for Intel Arria® 10 External Memory Interface IP........................................38
1.2.1 General Pin-Out Guidelines for Arria 10 EMIF IP............................................ 38
1.2.2 Resource Sharing Guidelines for Arria 10 EMIF IP.......................................... 43
1.3 Guidelines for Intel Stratix® 10 External Memory Interface IP..................................... 45
1.3.1 General Pin-Out Guidelines for Stratix 10 EMIF IP..........................................45
1.3.2 Resource Sharing Guidelines for Stratix 10 EMIF IP........................................50
1.4 Guidelines for UniPHY-based External Memory Interface IP......................................... 51
1.4.1 General Pin-out Guidelines for UniPHY-based External Memory Interface IP.......51
1.4.2 Pin-out Rule Exceptions for ×36 Emulated QDR II and QDR II+ SRAM Interfaces in Arria II, Stratix III and Stratix IV Devices.................................. 54
1.4.3 Pin-out Rule Exceptions for RLDRAM II and RLDRAM 3 Interfaces.................... 59
1.4.4 Pin-out Rule Exceptions for QDR II and QDR II+ SRAM Burst-length-of-two Interfaces............................................................................... 61
1.4.5 Pin Connection Guidelines Tables.................................................................62
1.4.6 PLLs and Clock Networks........................................................................... 72
1.5 Using PLL Guidelines............................................................................................. 76
1.6 PLL Cascading...................................................................................................... 77
1.7 DLL.....................................................................................................................78
1.8 Other FPGA Resources........................................................................................... 79
1.9 Document Revision History.....................................................................................80
2 DDR2, DDR3, and DDR4 SDRAM Board Design Guidelines.............................................. 83
2.1 Leveling and Dynamic Termination.......................................................................... 84
2.1.1 Read and Write Leveling............................................................................ 84
2.1.2 Dynamic ODT........................................................................................... 86
2.1.3 Dynamic On-Chip Termination.................................................................... 86
2.1.4 Dynamic On-Chip Termination in Stratix III and Stratix IV Devices...................87
2.1.5 Dynamic OCT in Stratix V Devices............................................................... 89
2.1.6 Dynamic On-Chip Termination (OCT) in Arria 10 and Stratix 10 Devices........... 89
2.2 DDR2 Terminations and Guidelines.......................................................................... 89
2.2.1 Termination for DDR2 SDRAM..................................................................... 89
2.2.2 DDR2 Design Layout Guidelines.................................................................. 95
2.2.3 General Layout Guidelines..........................................................................96
2.2.4 Layout Guidelines for DDR2 SDRAM Interface............................................... 96
2.3 DDR3 Terminations in Arria V, Cyclone V, Stratix III, Stratix IV, and Stratix V............... 99
2.3.1 Terminations for Single-Rank DDR3 SDRAM Unbuffered DIMM....................... 100
2.3.2 Terminations for Multi-Rank DDR3 SDRAM Unbuffered DIMM......................... 101
2.3.3 Terminations for DDR3 SDRAM Registered DIMM......................................... 102
2.3.4 Terminations for DDR3 SDRAM Load-Reduced DIMM.................................... 102
2.3.5 Terminations for DDR3 SDRAM Components With Leveling........................... 103
2.4 DDR3 and DDR4 on Arria 10 and Stratix 10 Devices.................................................103
2.4.1 Dynamic On-Chip Termination (OCT) in Arria 10 and Stratix 10 Devices..........104
2.4.2 Dynamic On-Die Termination (ODT) in DDR4...............................................104
2.4.3 Choosing Terminations on Arria 10 Devices................................................. 104
2.4.4 On-Chip Termination Recommendations for DDR3 and DDR4 on Arria 10 Devices.................................................105
2.5 Layout Approach................................................................................................. 105
2.6 Channel Signal Integrity Measurement................................................................... 106
2.6.1 Importance of Accurate Channel Signal Integrity Information........................ 106
2.6.2 Understanding Channel Signal Integrity Measurement.................................. 106
2.6.3 How to Enter Calculated Channel Signal Integrity Values.............................. 107
2.6.4 Guidelines for Calculating DDR3 Channel Signal Integrity..............................108
2.6.5 Guidelines for Calculating DDR4 Channel Signal Integrity..............................110
2.7 Design Layout Guidelines..................................................................................... 113
2.7.1 General Layout Guidelines........................................................................ 114
2.7.2 Layout Guidelines for DDR3 and DDR4 SDRAM Interfaces............................. 115
2.7.3 Length Matching Rules............................................................................. 118
2.7.4 Spacing Guidelines.................................................................................. 119
2.7.5 Layout Guidelines for DDR3 and DDR4 SDRAM Wide Interface (>72 bits)........120
2.8 Package Deskew................................................................................................. 123
2.8.1 Package Deskew Recommendation for Stratix V Devices............................... 123
2.8.2 DQ/DQS/DM Deskew............................................................................... 124
2.8.3 Address and Command Deskew.................................................................124
2.8.4 Package Deskew Recommendations for Arria 10 and Stratix 10 Devices.......... 124
2.8.5 Deskew Example.....................................................................................125
2.8.6 Package Migration................................................................................... 126
2.8.7 Package Deskew for RLDRAM II and RLDRAM 3........................................... 127
2.9 Document Revision History................................................................................... 127
3 Dual-DIMM DDR2 and DDR3 SDRAM Board Design Guidelines......................................130
3.1 General Layout Guidelines.................................................................................... 130
3.2 Dual-Slot Unbuffered DDR2 SDRAM....................................................................... 131
3.2.1 Overview of ODT Control.......................................................................... 132
3.2.2 DIMM Configuration................................................................................. 133
3.2.3 Dual-DIMM Memory Interface with Slot 1 Populated..................................... 134
3.2.4 Dual-DIMM with Slot 2 Populated.............................................................. 135
3.2.5 Dual-DIMM Memory Interface with Both Slot 1 and Slot 2 Populated.............. 136
3.2.6 Dual-DIMM DDR2 Clock, Address, and Command Termination and Topology.... 139
3.2.7 Control Group Signals.............................................................................. 140
3.2.8 Clock Group Signals.................................................................................140
3.3 Dual-Slot Unbuffered DDR3 SDRAM...................................................................... 140
3.3.1 Comparison of DDR3 and DDR2 DQ and DQS ODT Features and Topology....... 141
3.3.2 Dual-DIMM DDR3 Clock, Address, and Command Termination and Topology.... 141
3.3.3 FPGA OCT Features................................................................................. 142
3.4 Document Revision History................................................................................... 143
4 LPDDR2 and LPDDR3 SDRAM Board Design Guidelines................................................ 144
4.1 LPDDR2 Guidance............................................................................................... 144
4.1.1 LPDDR2 SDRAM Configurations................................................................. 144
4.1.2 OCT Signal Terminations for Arria V and Cyclone V Devices........................... 147
4.1.3 General Layout Guidelines........................................................................ 150
4.1.4 LPDDR2 Layout Guidelines........................................................................150
4.2 LPDDR3 Guidance............................................................................................... 152
4.2.1 Signal Integrity, Board Skew, and Board Setting Parameters......................... 153
4.2.2 LPDDR3 Layout Guidelines........................................................................153
4.2.3 Package Deskew..................................................................................... 153
4.3 Document Revision History................................................................................... 157
5 RLDRAM II and RLDRAM 3 Board Design Guidelines.................................................... 158
5.1 RLDRAM II Configurations.................................................................................... 159
5.2 RLDRAM 3 Configurations..................................................................................... 161
5.3 Signal Terminations............................................................................................. 162
5.3.1 Input to the FPGA from the RLDRAM Components........................................164
5.3.2 Outputs from the FPGA to the RLDRAM II and RLDRAM 3 Components........... 164
5.3.3 RLDRAM II Termination Schemes...............................................................165
5.3.4 RLDRAM 3 Termination Schemes............................................................... 165
5.4 PCB Layout Guidelines......................................................................................... 166
5.5 General Layout Guidelines.................................................................................... 167
5.6 RLDRAM II and RLDRAM 3 Layout Guidelines.......................................................... 167
5.7 Layout Approach................................................................................................. 169
5.7.1 Arria V and Stratix V Board Setting Parameters........................................... 170
5.7.2 Arria 10 Board Setting Parameters.............................................................170
5.8 Package Deskew for RLDRAM II and RLDRAM 3....................................................... 170
5.9 Document Revision History................................................................................... 171
6 QDR II and QDR-IV SRAM Board Design Guidelines..................................................... 172
6.1 QDR II SRAM Configurations.................................................................................173
6.2 Signal Terminations............................................................................................. 174
6.2.1 Output from the FPGA to the QDR II SRAM Component................................ 176
6.2.2 Input to the FPGA from the QDR II SRAM Component.................................. 176
6.2.3 Termination Schemes...............................................................................176
6.3 General Layout Guidelines.................................................................................... 179
6.4 QDR II Layout Guidelines..................................................................................... 179
6.5 QDR II SRAM Layout Approach..............................................................................181
6.6 Package Deskew for QDR II and QDR-IV.................................................................182
6.7 QDR-IV Layout Approach......................................................................................182
6.8 QDR-IV Layout Guidelines.................................................................................... 183
6.9 Document Revision History................................................................................... 184
7 Implementing and Parameterizing Memory IP............................................................. 186
7.1 Installing and Licensing IP Cores........................................................................... 186
7.2 Design Flow........................................................................................................187
7.2.1 IP Catalog Design Flow............................................................................ 188
7.2.2 Qsys System Integration Tool Design Flow.................................................. 195
7.3 UniPHY-Based External Memory Interface IP........................................................... 197
7.3.1 Qsys Interfaces.......................................................................................198
7.3.2 Generated Files for Memory Controllers with the UniPHY IP........................... 217
7.3.3 Parameterizing Memory Controllers............................................................220
7.3.4 Board Settings ....................................................................................... 234
7.3.5 Controller Settings for UniPHY IP............................................................... 245
7.3.6 Diagnostics for UniPHY IP......................................................................... 249
7.4 Intel Arria 10 External Memory Interface IP............................................................ 250
7.4.1 Qsys Interfaces.......................................................................................250
7.4.2 Generated Files for Arria 10 External Memory Interface IP............................ 262
7.4.3 Arria 10 EMIF IP DDR4 Parameters............................................................ 266
7.4.4 Arria 10 EMIF IP DDR3 Parameters............................................................ 288
7.4.5 Arria 10 EMIF IP LPDDR3 Parameters......................................................... 307
7.4.6 Arria 10 EMIF IP QDR-IV Parameters..........................................................322
7.4.7 Arria 10 EMIF IP QDR II/II+/II+ Xtreme Parameters.................................... 334
7.4.8 Arria 10 EMIF IP RLDRAM 3 Parameters......................................................345
7.4.9 Equations for Arria 10 EMIF IP Board Skew Parameters................................ 357
7.5 Intel Stratix 10 External Memory Interface IP......................................................... 363
7.5.1 Qsys Interfaces.......................................................................................363
7.5.2 Generated Files for Stratix 10 External Memory Interface IP..........................374
7.5.3 Stratix 10 EMIF IP DDR4 Parameters..........................................................379
7.5.4 Stratix 10 EMIF IP DDR3 Parameters..........................................................399
7.5.5 Stratix 10 EMIF IP LPDDR3 Parameters...................................................... 415
7.5.6 Stratix 10 EMIF IP QDR-IV Parameters....................................................... 430
7.5.7 Stratix 10 EMIF IP QDR II/II+/II+ Xtreme Parameters................................. 442
7.5.8 Stratix 10 EMIF IP RLDRAM 3 Parameters................................................... 453
7.6 Document Revision History................................................................................... 465
8 Simulating Memory IP..................................................................................................470
8.1 Simulation Options.............................................................................................. 470
8.2 Simulation Walkthrough with UniPHY IP..................................................................472
8.2.1 Simulation Scripts................................................................................... 473
8.2.2 Preparing the Vendor Memory Model.......................................................... 473
8.2.3 Functional Simulation with Verilog HDL.......................................................476
8.2.4 Functional Simulation with VHDL............................................................... 476
8.2.5 Simulating the Example Design................................................................. 477
8.2.6 UniPHY Abstract PHY Simulation................................................................ 479
8.2.7 PHY-Only Simulation................................................................................480
8.2.8 Post-fit Functional Simulation....................................................................481
8.2.9 Simulation Issues....................................................................................483
8.3 Simulation Walkthrough with Arria 10 EMIF IP.........................................................485
8.3.1 Skip Calibration Versus Full Calibration.......................................................485
8.3.2 Arria 10 Abstract PHY Simulation...............................................................486
8.3.3 Simulation Scripts................................................................................... 487
8.3.4 Functional Simulation with Verilog HDL.......................................................487
8.3.5 Functional Simulation with VHDL............................................................... 488
8.3.6 Simulating the Example Design................................................................. 489
8.4 Simulation Walkthrough with Stratix 10 EMIF IP...................................................... 490
8.4.1 Skip Calibration Versus Full Calibration.......................................................491
8.4.2 Simulation Scripts................................................................................... 492
8.4.3 Functional Simulation with Verilog HDL.......................................................492
8.4.4 Functional Simulation with VHDL............................................................... 493
8.4.5 Simulating the Example Design................................................................. 493
8.5 Document Revision History................................................................................... 495
9 Analyzing Timing of Memory IP................................................................................... 496
9.1 Memory Interface Timing Components................................................................... 497
9.1.1 Source-Synchronous Paths....................................................................... 497
9.1.2 Calibrated Paths......................................................................................497
9.1.3 Internal FPGA Timing Paths...................................................................... 498
9.1.4 Other FPGA Timing Parameters................................................................. 498
9.2 FPGA Timing Paths.............................................................................................. 499
9.2.1 Arria II Device PHY Timing Paths............................................................... 499
9.2.2 Stratix III and Stratix IV PHY Timing Paths................................................. 501
9.2.3 Arria V, Arria V GZ, Arria 10, Cyclone V, and Stratix V Timing paths................503
9.3 Timing Constraint and Report Files for UniPHY IP..................................................... 505
9.4 Timing Constraint and Report Files for Arria 10 EMIF IP............................................ 507
9.5 Timing Constraint and Report Files for Stratix 10 EMIF IP......................................... 509
9.6 Timing Analysis Description ................................................................................. 510
9.6.1 UniPHY IP Timing Analysis........................................................................ 511
9.6.2 Timing Analysis Description for Arria 10 EMIF IP.......................................... 520
9.6.3 Timing Analysis Description for Stratix 10 EMIF IP....................................... 523
9.7 Timing Report DDR..............................................................................................527
9.8 Report SDC........................................................................................................ 530
9.9 Calibration Effect in Timing Analysis.......................................................................530
9.9.1 Calibration Emulation for Calibrated Path.................................................... 531
9.9.2 Calibration Error or Quantization Error....................................................... 531
9.9.3 Calibration Uncertainties.......................................................................... 531
9.9.4 Memory Calibration................................................................................. 531
9.10 Timing Model Assumptions and Design Rules.........................................................532
9.10.1 Memory Clock Output Assumptions.......................................................... 533
9.10.2 Write Data Assumptions......................................................................... 534
9.10.3 Read Data Assumptions..........................................................................536
9.10.4 DLL Assumptions...................................................................................537
9.10.5 PLL and Clock Network Assumptions for Stratix III Devices......................... 537
9.11 Common Timing Closure Issues...........................................................................537
9.11.1 Missing Timing Margin Report..................................................................538
9.11.2 Incomplete Timing Margin Report............................................................ 538
9.11.3 Read Capture Timing............................................................................. 538
9.11.4 Write Timing......................................................................................... 538
9.11.5 Address and Command Timing................................................................ 539
9.11.6 PHY Reset Recovery and Removal............................................................ 539
9.11.7 Clock-to-Strobe (for DDR and DDR2 SDRAM Only)..................................... 540
9.11.8 Read Resynchronization and Write Leveling Timing (for SDRAM Only)........... 540
9.12 Optimizing Timing............................................................................................. 540
9.13 Timing Deration Methodology for Multiple Chip Select DDR2 and DDR3 SDRAM Designs..........................................................542
9.13.1 Multiple Chip Select Configuration Effects..................................................543
9.13.2 Timing Deration using the Board Settings................................................. 545
9.14 Early I/O Timing Estimation for Arria 10 EMIF IP....................................................548
9.14.1 Performing Early I/O Timing Analysis for Arria 10 EMIF IP........................... 549
9.15 Early I/O Timing Estimation for Stratix 10 EMIF IP................................................. 550
9.15.1 Performing Early I/O Timing Analysis for Stratix 10 EMIF IP........................ 550
9.16 Performing I/O Timing Analysis........................................................................... 551
9.16.1 Performing I/O Timing Analysis with Third-Party Simulation Tools.................552
9.16.2 Performing Advanced I/O Timing Analysis with Board Trace Delay Model....... 552
9.17 Document Revision History................................................................................. 553
10 Debugging Memory IP................................................................................................555
10.1 Resource and Planning Issues............................................................................. 555
10.1.1 Dedicated IOE DQS Group Resources and Pins...........................................555
10.1.2 Dedicated DLL Resources........................................................................556
10.1.3 Specific PLL Resources........................................................................... 557
10.1.4 Specific Global, Regional and Dual-Regional Clock Net Resources................. 557
10.1.5 Planning Your Design............................................................................. 557
10.1.6 Optimizing Design Utilization...................................................................558
10.2 Interface Configuration Performance Issues.......................................................... 558
10.2.1 Interface Configuration Bottleneck and Efficiency Issues............................. 559
10.3 Functional Issue Evaluation.................................................................................560
10.3.1 Correct Combination of the Quartus Prime Software and ModelSim - Intel FPGA Edition Device Models..................................................... 560
10.3.2 Intel IP Memory Model........................................................................... 561
10.3.3 Vendor Memory Model............................................................................561
10.3.4 Insufficient Memory in Your PC................................................................ 561
10.3.5 Transcript Window Messages................................................................... 562
10.3.6 Passing Simulation.................................................................................563
10.3.7 Modifying the Example Driver to Replicate the Failure................................. 563
10.4 Timing Issue Characteristics............................................................................... 564
10.4.1 Evaluating FPGA Timing Issues................................................................565
10.4.2 Evaluating External Memory Interface Timing Issues................................. 566
10.5 Verifying Memory IP Using the Signal Tap II Logic Analyzer..................................... 567
10.5.1 Signals to Monitor with the Signal Tap II Logic Analyzer.............................. 568
10.6 Hardware Debugging Guidelines.......................................................................... 569
10.6.1 Create a Simplified Design that Demonstrates the Same Issue.................... 569
10.6.2 Measure Power Distribution Network........................................................ 569
10.6.3 Measure Signal Integrity and Setup and Hold Margin.................................. 569
10.6.4 Vary Voltage......................................................................................... 570
10.6.5 Use Freezer Spray and Heat Gun............................................................. 570
10.6.6 Operate at a Lower Speed...................................................................... 570
10.6.7 Determine Whether the Issue Exists in Previous Versions of Software........... 570
10.6.8 Determine Whether the Issue Exists in the Current Version of Software........ 571
10.6.9 Try A Different PCB................................................................................ 571
10.6.10 Try Other Configurations.......................................................................572
10.6.11 Debugging Checklist.............................................................................572
10.7 Catagorizing Hardware Issues............................................................................. 573
10.7.1 Signal Integrity Issues........................................................................... 573
10.7.2 Hardware and Calibration Issues..............................................................575
10.8 EMIF Debug Toolkit Overview.............................................................................. 579
10.9 Document Revision History................................................................................. 579
11 Optimizing the Controller........................................................................................... 581
11.1 Factors Affecting Efficiency................................................................................. 581
11.1.1 Interface Standard................................................................................ 582
11.1.2 Bank Management Efficiency...................................................................582
11.1.3 Data Transfer........................................................................................ 584
11.2 Ways to Improve Efficiency.................................................................................585
11.2.1 DDR2 SDRAM Controller......................................................................... 586
11.2.2 Auto-Precharge Commands.....................................................................586
11.2.3 Additive Latency....................................................................................588
11.2.4 Bank Interleaving.................................................................................. 589
11.2.5 Command Queue Look-Ahead Depth........................................................ 591
11.2.6 Additive Latency and Bank Interleaving.................................................... 592
11.2.7 User-Controlled Refresh......................................................................... 593
11.2.8 Frequency of Operation.......................................................................... 594
11.2.9 Burst Length.........................................................................................595
11.2.10 Series of Reads or Writes...................................................................... 595
11.2.11 Data Reordering.................................................................................. 595
11.2.12 Starvation Control................................................................................596
11.2.13 Command Reordering...........................................................................597
11.2.14 Bandwidth.......................................................................................... 598
11.2.15 Efficiency Monitor................................................................................ 599
11.3 Document Revision History................................................................................. 600
12 PHY Considerations....................................................................................................602
12.1 Core Logic and User Interface Data Rate...............................................................602
12.2 Hard and Soft Memory PHY.................................................................................603
12.3 Sequencer........................................................................................................603
12.4 PLL, DLL and OCT Resource Sharing.....................................................................604
12.5 Pin Placement Consideration............................................................................... 605
12.6 Document Revision History................................................................................. 606
13 Power Estimation Methods for External Memory Interfaces....................................... 608
13.1 Performing Vector-Based Power Analysis with the Power Analyzer............................ 609
13.2 Document Revision History................................................................................. 609
1 Planning Pin and FPGA Resources
This information is for board designers who must determine FPGA pin usage to create board layouts. The board design process sometimes occurs concurrently with the RTL design process.
Use this document with the External Memory Interfaces chapter of the relevant device
family handbook.
Typically, all external memory interfaces require the following FPGA resources:
• Interface pins
• PLL and clock network
• DLL
• Other FPGA resources—for example, core fabric logic and on-chip termination (OCT) calibration blocks
After you know the requirements for your external memory interface, you can start
planning your system. The I/O pins and internal memory cannot be shared for other
applications or external memory interfaces. However, if you do not have enough PLLs,
DLLs, or clock networks for your application, you may share these resources among
multiple external memory interfaces or modules in your system.
Ideally, any interface should reside entirely in a single bank; however, interfaces that
span multiple adjacent banks or the entire side of a device are also fully supported. In
addition, you may also have wraparound memory interfaces, where the design uses
two adjacent sides of the device and the memory interface logic resides in a device
quadrant. In some cases, top or bottom bank interfaces have higher supported clock
rates than left or right or wraparound interfaces.
1.1 Interface Pins
Any I/O banks that do not support transceiver operations in Arria® II, Arria V, Arria
10, Stratix® III, Stratix IV, and Stratix V devices support external memory interfaces.
However, DQS (data strobe or data clock) and DQ (data) pins are listed in the device
pin tables and fixed at specific locations in the device. You must adhere to these pin
locations as these locations are optimized in routing to minimize skew and maximize
margin. Always check the external memory interfaces chapters from the device
handbooks for the number of DQS and DQ groups supported in a particular device and
the pin table for the actual locations of the DQS and DQ pins.
The following table summarizes the number of pins required for various example memory interfaces. The table assumes series OCT with calibration and parallel OCT with calibration, or dynamic calibrated OCT, where applicable, as indicated by the use of RUP and RDN pins or an RZQ pin.
Table 1. Pin Counts for Various Example External Memory Interfaces

For each example interface and FPGA DQS group size, the table lists the number of DQ pins, the number of DQS/CQ/QK pins, the number of control pins, the number of address pins, the number of command pins, the number of clock pins, the RUP/RDN pins, the RZQ pins, and the resulting total pin counts (with RUP/RDN pins and with an RZQ pin). The example interfaces are LPDDR2, LPDDR3, DDR4 SDRAM, DDR3 SDRAM, DDR2 SDRAM, DDR SDRAM, QDR II+ and QDR II+ Xtreme SRAM, QDR II SRAM, QDR IV SRAM, RLDRAM 3 CIO, and RLDRAM II CIO, in FPGA DQS group sizes ranging from ×4 to ×36. Refer to the notes below for the assumptions behind the individual pin counts.
Notes to table:
1. These example pin counts are derived from memory vendor data sheets. Check the exact number of address and command pins of the memory devices in the configuration that you are using.
2. PLL and DLL input reference clock pins are not counted in this calculation.
3. The number of address pins depends on the memory device density.
4. Some DQS or DQ pins are dual purpose and can also be required as RUP, RDN, or configuration pins. A DQS group is lost if you use these pins for configuration or as RUP or RDN pins for calibrated OCT. Pick RUP and RDN pins in a DQS group that is not used for memory interface purposes. You may need to place the DQS and DQ pins manually if you place the RUP and RDN pins in the same DQS group pins.
5. The TDQS and TDQS# pins are not counted in this calculation, as these pins are not used in the memory controller.
6. Numbers are based on 1-GB memory devices.
7. Intel® FPGAs do not support DM pins in ×4 mode with differential DQS signaling.
8. Numbers are based on 2-GB memory devices without using differential DQS, RDQS, and RDQS# pin support.
9. Assumes single-ended DQS mode. DDR2 SDRAM also supports differential DQS, which makes these DQS and DM numbers identical to DDR3 SDRAM.
10. The QVLD pin, which indicates valid read data from the QDR II+ SRAM or RLDRAM II device, is included in this number.
11. RZQ pins are supported by Arria V, Arria 10, Cyclone V, and Stratix V devices.
12. Numbers are based on a 2-GB discrete device with the alert flag and the address and command parity pins included.
13. DDR4 ×16 devices support only a bank group of 1.
14. Numbers are based on a 576-MB device.
15. These numbers include K and K# clock pins. The CQ and CQ# clock pins are calculated in a separate column.
16. These numbers include K, K#, C, and C# clock pins. The CQ and CQ# clock pins are calculated in a separate column.
17. These numbers include CK, CK#, DK, and DK# clock pins. QK and QK# clock pins are calculated in a separate column.
18. This number is based on a 36864-kilobit device.
19. For DDR, DDR2, DDR3, LPDDR2, and LPDDR3 SDRAM, and for RLDRAM 3 and RLDRAM II, these are DM pins. For QDR II/II+/II+ Xtreme, they are BWS pins. For DDR4, they are DM/DBI pins. For QDR IV, they are DINVA[1:0], DINVB[1:0], and AINV.
20. This number is based on a 144-Mbit device with address bus inversion and data bus inversion bits included.
Note: Maximum interface width varies from device to device, depending on the number of I/O pins and DQS or DQ groups available. Achievable interface width also depends on the number of address and command pins that the design requires. To ensure that adequate PLL, clock, and device routing resources are available, you should always test fit any IP in the Quartus® Prime software before PCB sign-off.
Intel devices do not limit the width of external memory interfaces beyond the
following requirements:
• Maximum possible interface width in any particular device is limited by the number of DQS groups available.
• Sufficient clock networks must be available to the interface PLL, as required by the IP.
• Sufficient spare pins must exist within the chosen bank or side of the device to satisfy all other address, command, and clock pin placement requirements.
• The greater the number of banks, the greater the skew; hence, Intel recommends that you always generate a test project of your desired configuration and confirm that it meets timing.
1.1.1 Estimating Pin Requirements
You should use the Quartus Prime software for final pin fitting. However, you can
estimate whether you have enough pins for your memory interface using the EMIF
Device Selector (for Arria 10 and Stratix 10 devices) on www.altera.com, or by the
following steps:
1. Find out how many read data pins are associated with each read data strobe or clock pair, to determine which DQS and DQ group availability column (×4, ×8/×9, ×16/×18, or ×32/×36) of the pin table to refer to.
2. Check the device density and package offering information to see whether you can implement the interface in one I/O bank, on one side, or on two adjacent sides.
Note: If you target Arria II GX devices and you do not have enough I/O pins to have the memory interface on one side of the device, you may place them on the other side of the device. Arria II GX devices allow a memory interface to span the top and bottom sides, or the left and right sides, of the device. For any interface that spans two different sides, use the wraparound interface performance.
3. Calculate the number of other memory interface pins needed, including any other clocks (write clock or memory system clock), address, command, RUP, RDN, RZQ, and any other pins to be connected to the memory components; a simple way to total these counts is shown in the sketch after these notes. Ensure that you have enough pins to implement the interface in one I/O bank, on one side, or on two adjacent sides.
Note:
a. The DQS groups in Arria II GX devices reside on I/O modules, each consisting of 16 I/O pins. You can use a maximum of only 12 pins per I/O module when the pins are used as DQS or DQ pins, or as HSTL/SSTL output or HSTL/SSTL bidirectional pins. When counting the number of available pins for the rest of your memory interface, ensure that you do not count the leftover four pins in each I/O module used for DQS, DQ, address, and command pins. The leftover four pins can be used as input pins only.
b. Refer to the device pin-out tables and look for the blank space in the relevant DQS group column to identify the four pins that cannot be used in an I/O module for Arria II GX devices.
c. If you enable Ping Pong PHY, the IP core exposes two independent Avalon interfaces to user logic, and a single external memory interface of double the width for the data bus and the CS#, CKE, ODT, and CK/CK# signals. The remaining signals are as in a single-interface configuration.
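The pin arithmetic in step 3 can be prototyped before you open the Quartus Prime software. The following Python sketch simply sums the per-signal pin counts in the same way as the example table above; the counts shown are assumptions for a single ×8 DDR3 component and must be replaced with the values from your memory device's data sheet.

# Rough pin-count estimate for one external memory interface.
# The counts below are illustrative assumptions for a single x8 DDR3
# component; substitute the values from your memory device's data sheet.
def estimate_pins(dq, dqs, dm, address, command, clock, calibration):
    """Sum the pins for one interface. PLL and DLL reference clock pins
    are excluded, consistent with the example pin-count table."""
    return dq + dqs + dm + address + command + clock + calibration

ddr3_x8 = dict(dq=8, dqs=2, dm=1, address=14, command=10, clock=2,
               calibration=1)      # one RZQ pin; use 2 for RUP/RDN devices
print(estimate_pins(**ddr3_x8))    # 38 pins in this assumed configuration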
You should test the proposed pin-outs with the rest of your design in the Quartus
Prime software (with the correct I/O standard and OCT connections) before finalizing
the pin-outs. There can be interactions between modules that are illegal in the
Quartus Prime software that you might not know about unless you compile the design
and use the Quartus Prime Pin Planner.
Related Links
External Memory Interface Device Selector
1.1.2 DDR, DDR2, DDR3, and DDR4 SDRAM Clock Signals
DDR, DDR2, DDR3, and DDR4 SDRAM devices use CK and CK# signals to clock the
address and command signals into the memory. Furthermore, the memory uses these
clock signals to generate the DQS signal during a read through the DLL inside the
memory. The SDRAM data sheet specifies the following timings:
• tDQSCK is the skew between the CK or CK# signals and the SDRAM-generated DQS signal
• tDSH is the DQS falling edge from CK rising edge hold time
• tDSS is the DQS falling edge from CK rising edge setup time
• tDQSS is the positive DQS latching edge to CK rising edge
SDRAM devices have a write requirement (tDQSS) that states that the positive edge of the DQS signal on writes must be within ±25% (±90°) of the positive edge of the SDRAM clock input. Therefore, you should generate the CK and CK# signals using the DDR registers in the IOE to match the DQS signal and reduce any variations across process, voltage, and temperature. The positive edge of the SDRAM clock, CK, is aligned with the DQS write to satisfy tDQSS.
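As a worked example of the tDQSS requirement, the following sketch converts the ±25% (±90°) window into picoseconds for an assumed memory clock frequency; the 400 MHz value is only an illustration.

# tDQSS: the DQS rising edge on writes must arrive within +/-25% of tCK
# (one quarter of the clock period) of the CK rising edge.
def tdqss_window_ps(mem_clk_mhz):
    tck_ps = 1.0e6 / mem_clk_mhz   # clock period in picoseconds
    return 0.25 * tck_ps           # allowed +/- skew around the CK edge

print(tdqss_window_ps(400))        # 625.0 ps at an assumed 400 MHz clock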
DDR3 SDRAM can use a daisy-chained control address command (CAC) topology, in
which the memory clock must arrive at each chip at a different time. To compensate
for the flight-time skew between devices when using the CAC topology, you should
employ write leveling.
1.1.3 DDR, DDR2, DDR3, and DDR4 SDRAM Command and Address Signals
Command and address signals in SDRAM devices are clocked into the memory device
using the CK or CK# signal. These pins operate at single data rate (SDR) using only
one clock edge. The number of address pins depends on the SDRAM device capacity.
The address pins are multiplexed, so two clock cycles are required to send the row,
column, and bank address.
For DDR, DDR2, and DDR3, the CS#, RAS#, CAS#, WE#, CKE, and ODT pins are SDRAM
command and control pins. For DDR3 SDRAM, certain topologies such as RDIMM and
LRDIMM include RESET#, PAR_IN (1.5V LVCMOS I/O standard), and ERR_OUT#
(SSTL-15 I/O standard).
The DDR2 SDRAM command and address inputs do not have a symmetrical setup and
hold time requirement with respect to the SDRAM clocks, CK, and CK#.
Although DDR4 operates in fundamentally the same way as other SDRAM, there are
no longer dedicated pins for RAS#, CAS#, and WE#, as those are now shared with
higher-order address pins. DDR4 still has CS#, CKE, ODT, and RESET# pins, similar to
DDR3. DDR4 introduces some additional pins, including the ACT# (activate) pin and
BG (bank group) pins. Depending on the memory format and the functions enabled,
the following pins might also exist in DDR4: PAR (address command parity) pin and
the ALERT# pin.
For Intel SDRAM high-performance controllers in Stratix III and Stratix IV devices, the
command and address clock is a dedicated PLL clock output whose phase can be
adjusted to meet the setup and hold requirements of the memory clock. The
command and address clock is also typically half-rate, although a full-rate
implementation can also be created. The command and address pins use the DDIO
output circuitry to launch commands from either the rising or falling edges of the
clock. The chip select CS#, clock enable CKE, and ODT pins are only enabled for one
memory clock cycle and can be launched from either the rising or falling edge of the
command and address clock signal. The address and other command pins are enabled
for two memory clock cycles and can also be launched from either the rising or falling
edge of the command and address clock signal.
In Arria II GX devices, the command and address clock is shared with either the write_clk_2x or the mem_clk_2x clock.
1.1.4 DDR, DDR2, DDR3, and DDR4 SDRAM Data, Data Strobes, DM/DBI, and Optional ECC Signals
DDR SDRAM uses a bidirectional single-ended data strobe (DQS); DDR3 and DDR4
SDRAM use bidirectional differential data strobes. The DQSn pins in DDR2 SDRAM
devices are optional but recommended for DDR2 SDRAM designs operating at more
than 333 MHz. Differential DQS operation enables improved system timing due to
reduced crosstalk and less simultaneous switching noise on the strobe output drivers.
The DQ pins are also bidirectional.
Regardless of interface width, DDR SDRAM always operates in ×8 mode DQS groups.
DQ pins in DDR2, DDR3, and DDR4 SDRAM interfaces can operate in either ×4 or ×8
mode DQS groups, depending on your chosen memory device or DIMM, regardless of
interface width. The ×4 and ×8 configurations use one pair of bidirectional data strobe
signals, DQS and DQSn, to capture input data. However, two pairs of data strobes,
UDQS and UDQS# (upper byte) and LDQS and LDQS# (lower byte), are required by
the ×16 configuration devices. A group of DQ pins must remain associated with its
respective DQS and DQSn pins.
The DQ signals are edge-aligned with the DQS signal during a read from the memory
and are center-aligned with the DQS signal during a write to the memory. The
memory controller shifts the DQ signals by –90 degrees during a write operation to
center align the DQ and DQS signals. The PHY IP delays the DQS signal during a read,
so that the DQ and DQS signals are center-aligned at the capture register. Intel devices use a phase-locked loop (PLL) to center-align the DQS signal with respect to the DQ signals during writes, and dedicated DQS phase-shift circuitry to shift the incoming DQS signal during reads. The following figure shows an example where the DQS signal is shifted by 90 degrees for a read from the DDR2 SDRAM.
Figure 1. Edge-Aligned DQ and DQS Relationship During a DDR2 SDRAM Read in Burst-of-Four Mode
(The waveform shows DQS and DQ at the FPGA pin and at the DQ IOE registers, with the DQS phase shift applied between the read preamble and postamble.)
The following figure shows an example of the relationship between the data and data
strobe during a burst-of-four write.
Figure 2. DQ and DQS Relationship During a DDR2 SDRAM Write in Burst-of-Four Mode
(The waveform shows DQS and DQ at the FPGA pin, with the DQS edges centered in the DQ data windows.)
The memory device's setup (tDS) and hold times (tDH) for the DQ and DM pins during
writes are relative to the edges of DQS write signals and not the CK or CK# clock.
Setup and hold requirements are not necessarily balanced in DDR2 and DDR3 SDRAM,
unlike in DDR SDRAM devices.
The DQS signal is generated on the positive edge of the system clock to meet the
tDQSS requirement. DQ and DM signals use a clock shifted –90 degrees from the
system clock, so that the DQS edges are centered on the DQ or DM signals when they
arrive at the DDR2 SDRAM. The DQS, DQ, and DM board trace lengths need to be
tightly matched (within 20 ps).
The SDRAM uses the DM pins during a write operation. Driving the DM pins low indicates that the write is valid. The memory masks the DQ signals if the DM pins are driven
high. To generate the DM signal, Intel recommends that you use the spare DQ pin
within the same DQS group as the respective data, to minimize skew.
The DM signal's timing requirements at the SDRAM input are identical to those for DQ
data. The DDR registers, clocked by the –90 degree shifted clock, create the DM
signals.
DDR4 supports DM similarly to other SDRAM, except that in DDR4 DM is active LOW
and bidirectional, because it supports Data Bus Inversion (DBI) through the same pin.
DM is multiplexed with DBI by a Mode Register setting whereby only one function can
be enabled at a time. DBI is an input/output identifying whether to store/output the
true or inverted data. When enabled, if DBI is LOW, during a write operation the data
is inverted and stored inside the DDR4 SDRAM; during a read operation, the data is
inverted and output. The data is not inverted if DBI is HIGH. For Arria 10, the DBI (for
DDR4) and the DM (for DDR3) pins in each DQS group must be paired with a DQ pin
for proper operation.
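The write-direction DM/DBI behavior described above can be modeled with a short sketch. This is a simplified illustration, not the DDR4 device's implementation; the decision to invert when more than four bits of a byte are zero is the usual power-saving criterion and is an assumption of this sketch.

# Simplified model of DDR4 write-side Data Bus Inversion (DBI#, active low).
# When DBI# is low, the byte on the bus is inverted before being stored;
# when DBI# is high, the byte is passed through unchanged.
def dbi_encode(byte):
    zeros = 8 - bin(byte & 0xFF).count("1")
    if zeros > 4:                    # inverting reduces zeros on the bus
        return (~byte) & 0xFF, 0     # drive DBI# low, send inverted data
    return byte & 0xFF, 1            # drive DBI# high, send data as-is

def dbi_decode(bus_byte, dbi_n):
    return (~bus_byte) & 0xFF if dbi_n == 0 else bus_byte

data = 0x03                          # six zero bits, so the byte is inverted
bus_byte, dbi_n = dbi_encode(data)
assert dbi_decode(bus_byte, dbi_n) == data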
Some SDRAM modules support error correction coding (ECC) to allow the controller to
detect and automatically correct errors in data transmission. The 72-bit SDRAM
modules contain eight extra data pins in addition to 64 data pins. The eight extra ECC
pins should be connected to a single DQS or DQ group on the FPGA.
1.1.5 DDR, DDR2, DDR3, and DDR4 SDRAM DIMM Options
Unbuffered DIMMs (UDIMMs) require one set of chip-select (CS#), on-die termination (ODT), clock-enable (CKE), and clock-pair (CK/CKn) signals for every physical rank on the
DIMM. Registered DIMMs use only one pair of clocks. DDR3 registered DIMMs require
a minimum of two chip-select signals, while DDR4 requires only one.
Compared to unbuffered DIMMs (UDIMMs), registered and load-reduced DIMMs (RDIMMs and LRDIMMs, respectively) use at least two chip-select signals, CS#[1:0], in DDR3 and DDR4.
address, RAS#, CAS#, and WE# signals. A parity error signal is asserted by the module
whenever a parity error is detected.
LRDIMMs expand on the operation of RDIMMs by buffering the DQ/DQS bus. Only one
electrical load is presented to the controller regardless of the number of ranks,
therefore only one clock enable (CKE) and ODT signal are required for LRDIMMs,
regardless of the number of physical ranks. Because the number of physical ranks
may exceed the number of physical chip-select signals, DDR3 LRDIMMs provide a
feature known as rank multiplication, which aggregates two or four physical ranks into
one larger logical rank. Refer to LRDIMM buffer documentation for details on rank
multiplication.
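As a quick illustration of rank multiplication, the following sketch computes how many logical ranks and how many additional address bits result from a given number of physical ranks and rank multiplication factor (RMF); the exact chip-select-to-address mapping is defined in the LRDIMM buffer documentation referenced above.

# Rank multiplication on a DDR3 LRDIMM aggregates RMF physical ranks into
# one logical rank; each doubling of the RMF repurposes one chip-select
# as an additional address bit (for example, CS#[2] as A[16]).
import math

def rank_multiplication(physical_ranks, rmf):
    logical_ranks = physical_ranks // rmf
    extra_address_bits = int(math.log2(rmf))
    return logical_ranks, extra_address_bits

print(rank_multiplication(8, 4))     # (2, 2): 8 physical ranks, 2 logical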
The following table shows UDIMM and RDIMM pin options for DDR, DDR2, and DDR3.
Table 2.  UDIMM and RDIMM Pin Options for DDR, DDR2, and DDR3
Pin options are listed for UDIMM (Single Rank), UDIMM (Dual Rank), RDIMM (Single Rank), and RDIMM (Dual Rank) configurations.
Data (all configurations): 72 bit DQ[71:0] = {CB[7:0], DQ[63:0]}
Data Mask (all configurations): DM[8:0]
Data Strobe (1) (all configurations): DQS[8:0] and DQS#[8:0]
Address (all configurations): BA[2:0], A[15:0] (2 GB: A[13:0]; 4 GB: A[14:0]; 8 GB: A[15:0])
Clock: CK0/CK0# for UDIMM (Single Rank), RDIMM (Single Rank), and RDIMM (Dual Rank); CK0/CK0# and CK1/CK1# for UDIMM (Dual Rank)
Command: UDIMM (Single Rank) uses ODT, CS#, CKE, RAS#, CAS#, WE#; UDIMM (Dual Rank) uses ODT[1:0], CS#[1:0], CKE[1:0], RAS#, CAS#, WE#; RDIMM (Single Rank) uses ODT, CS#[1:0], CKE, RAS#, CAS#, WE# (2); RDIMM (Dual Rank) uses ODT[1:0], CS#[1:0], CKE[1:0], RAS#, CAS#, WE#
Parity: — for UDIMMs; PAR_IN, ERR_OUT for RDIMMs
Other Pins (all configurations): SA[2:0], SDA, SCL, EVENT#, RESET#
Notes to Table:
1. DQS#[8:0] is optional in DDR2 SDRAM and is not supported in DDR SDRAM interfaces.
2. For single rank DDR2 RDIMM, ignore CS#[1] because it is not used.
The following table shows LRDIMM pin options for DDR3.
Table 3.  LRDIMM Pin Options for DDR3
Pin options are listed for the following configurations (R = rank, RMF = rank multiplication factor): x4 2R; x4 4R RMF=1; x4 4R RMF=2; x4 8R RMF=2; x4 8R RMF=4; x8 4R RMF=1; x8 4R RMF=2.
Data (all configurations): 72 bit DQ[71:0] = {CB[7:0], DQ[63:0]}
Data Mask: — for x4 configurations; DM[8:0] for x8 configurations
Data Strobe: DQS[17:0] and DQS#[17:0] for x4 configurations; DQS[8:0] and DQS#[8:0] for x8 configurations
Address: x4 2R, x4 4R RMF=1, and x8 4R RMF=1 use BA[2:0], A[15:0] (2 GB: A[13:0]; 4 GB: A[14:0]; 8 GB: A[15:0]); x4 4R RMF=2, x4 8R RMF=2, and x8 4R RMF=2 use BA[2:0], A[16:0] (4 GB: A[14:0]; 8 GB: A[15:0]; 16 GB: A[16:0]); x4 8R RMF=4 uses BA[2:0], A[17:0] (16 GB: A[15:0]; 32 GB: A[16:0]; 64 GB: A[17:0])
Clock (all configurations): CK0/CK0#
Command: x4 2R uses ODT, CS[1:0]#, CKE, RAS#, CAS#, WE#; x4 4R RMF=2 and x8 4R RMF=2 use ODT, CS[2:0]#, CKE, RAS#, CAS#, WE#; x4 4R RMF=1, x4 8R RMF=2, x4 8R RMF=4, and x8 4R RMF=1 use ODT, CS[3:0]#, CKE, RAS#, CAS#, WE#
Parity (all configurations): PAR_IN, ERR_OUT
Other Pins (all configurations): SA[2:0], SDA, SCL, EVENT#, RESET#
Notes to Table:
1. DM pins are not used for LRDIMMs that are constructed using ×4 components.
2. S#[2] is treated as A[16] (whose corresponding pins are labeled as CS#[2] or RM[0]) and S#[3] is treated as A[17]
(whose corresponding pins are labeled as CS#[3] or RM[1]) for certain rank multiplication configurations.
3. R = rank, RMF = rank multiplication factor.
The following table shows UDIMM, RDIMM, and LRDIMM pin options for DDR4.
Table 4.  UDIMM, RDIMM, and LRDIMM Pin Options for DDR4
Pin options are listed for UDIMM (Single Rank), UDIMM (Dual Rank), RDIMM (Single Rank), RDIMM (Dual Rank), LRDIMM (Dual Rank), and LRDIMM (Quad Rank) configurations.
Data (all configurations): 72 bit DQ[71:0] = {CB[7:0], DQ[63:0]}
Data Mask: DM#/DBI#[8:0] (1) for UDIMMs and RDIMMs; — for LRDIMMs
Data Strobe: UDIMMs use x8: DQS[8:0] and DQS#[8:0]; RDIMMs use x8: DQS[8:0] and DQS#[8:0] or x4: DQS[17:0] and DQS#[17:0]; LRDIMMs use x4: DQS[17:0] and DQS#[17:0]
Address: UDIMM (Single Rank) uses BA[1:0], BG[1:0], A[16:0] (4 GB: A[14:0]; 8 GB: A[15:0]; 16 GB: A[16:0] (2)); UDIMM (Dual Rank) uses BA[1:0], BG[1:0], A[16:0] (8 GB: A[14:0]; 16 GB: A[15:0]; 32 GB: A[16:0] (2)); RDIMM (Single Rank) uses BA[1:0], BG[1:0], x8: A[16:0] or x4: A[17:0] (4 GB: A[14:0]; 8 GB: A[15:0]; 16 GB: A[16:0] (2); 32 GB: A[17:0] (3)); RDIMM (Dual Rank) uses BA[1:0], BG[1:0], x8: A[16:0] or x4: A[17:0] (8 GB: A[14:0]; 16 GB: A[15:0]; 32 GB: A[16:0] (2); 64 GB: A[17:0] (3)); LRDIMM (Dual Rank) uses BA[1:0], BG[1:0], A[17:0] (16 GB: A[15:0]; 32 GB: A[16:0] (2); 64 GB: A[17:0] (3)); LRDIMM (Quad Rank) uses BA[1:0], BG[1:0], A[17:0] (32 GB: A[15:0]; 64 GB: A[16:0] (2); 128 GB: A[17:0] (3))
Clock: CK0/CK0# for all configurations except UDIMM (Dual Rank), which uses CK0/CK0# and CK1/CK1#
Command: UDIMM (Single Rank) uses ODT, CS#, CKE, ACT#, RAS#/A16, CAS#/A15, WE#/A14; UDIMM (Dual Rank) uses ODT[1:0], CS#[1:0], CKE[1:0], ACT#, RAS#/A16, CAS#/A15, WE#/A14; RDIMM (Single Rank) uses ODT, CS#, CKE, ACT#, RAS#/A16, CAS#/A15, WE#/A14; RDIMM (Dual Rank) uses ODT[1:0], CS#[1:0], CKE, ACT#, RAS#/A16, CAS#/A15, WE#/A14; LRDIMM (Dual Rank) uses ODT, CS#[1:0], CKE, ACT#, RAS#/A16, CAS#/A15, WE#/A14; LRDIMM (Quad Rank) uses ODT, CS#[3:0], CKE, ACT#, RAS#/A16, CAS#/A15, WE#/A14
Parity (all configurations): PAR, ALERT#
Other Pins (all configurations): SA[2:0], SDA, SCL, EVENT#, RESET#
Notes to Table:
1. DM/DBI pins are available only for DIMMs constructed using x8 or greater components.
2. This density requires 4Gb x4 or 2Gb x8 DRAM components.
3. This density requires 8Gb x4 DRAM components.
4. This table assumes a single-slot configuration. The Arria 10 memory controller can support up to 4 ranks per channel. A
single-slot interface may have up to 4 ranks, and a dual-slot interface may have up to 2 ranks per slot. In either case,
the total number of ranks, calculated as the number of slots multiplied by the number of ranks per slot, must be less
than or equal to 4.
1.1.6 QDR II, QDR II+, and QDR II+ Xtreme SRAM Clock Signals
QDR II, QDR II+ and QDR II+ Xtreme SRAM devices have two pairs of clocks, listed
below.
•
Input clocks K and K#
•
Echo clocks CQ and CQ#
In addition, QDR II devices have a third pair of input clocks, C and C#.
The positive input clock, K, is the logical complement of the negative input clock, K#.
Similarly, C and CQ are complements of C# and CQ#, respectively. With these
complementary clocks, the rising edges of each clock leg latch the DDR data.
The QDR II SRAM devices use the K and K# clocks for write accesses, and use the C and C#
clocks for read accesses only when interfacing with more than one QDR II SRAM device.
The number of loads that the K and K# clocks drive affects the switching times of these
outputs; when a controller drives a single QDR II SRAM device, C and C# are unnecessary,
because the propagation delays from the controller to the QDR II SRAM device and back
are the same. Therefore, to reduce the number of loads
on the clock traces, QDR II SRAM devices have a single-clock mode, and the K and K#
clocks are used for both reads and writes. In this mode, the C and C# clocks are tied
to the supply voltage (VDD). Intel FPGA external memory IP supports only single-clock
mode.
For QDR II, QDR II+, or QDR II+ Xtreme SRAM devices, the rising edge of K is used to
capture synchronous inputs to the device and to drive out data through Q[x:0], in
similar fashion to QDR II SRAM devices in single clock mode. All accesses are initiated
on the rising edge of K.
CQ and CQ# are the source-synchronous output clocks from the QDR II, QDR II+, or
QDR II+ Xtreme SRAM device that accompany the read data.
The Intel device outputs the K and K# clocks, data, address, and command lines to the
QDR II, QDR II+, or QDR II+ Xtreme SRAM device. For the controller to operate
properly, the write data (D), address (A), and control signal trace lengths (and
therefore the propagation times) should be equal to the K and K# clock trace lengths.
You can generate K and K# clocks using any of the PLL registers via the DDR registers.
Because of strict skew requirements between K and K# signals, use adjacent pins to
generate the clock pair. The propagation delays for K and K# from the FPGA to the
QDR II, QDR II+, or QDR II+ Xtreme SRAM device are equal to the delays on the data
and address (D, A) signals. Therefore, the signal skew effect on the write and read
request operations is minimized by using identical DDR output circuits to generate
clock and data inputs to the memory.
1.1.7 QDR II, QDR II+ and QDR II+ Xtreme SRAM Command Signals
QDR II, QDR II+ and QDR II+ Xtreme SRAM devices use the write port select (WPS#)
signal to control write operations and the read port select (RPS#) signal to control
read operations.
1.1.8 QDR II, QDR II+ and QDR II+ Xtreme SRAM Address Signals
QDR II, QDR II+ and QDR II+ Xtreme SRAM devices use one address bus (A) for both
read and write accesses.
1.1.9 QDR II, QDR II+ and QDR II+ Xtreme SRAM Data, BWS, and QVLD
Signals
QDR II, QDR II+ and QDR II+ Xtreme SRAM devices use two unidirectional data
buses: one for writes (D) and one for reads (Q).
At the pin, the read data is edge-aligned with the CQ and CQ# clocks while the write
data is center-aligned with the K and K# clocks (see the following figures).
Figure 3.  Edge-aligned CQ and Q Relationship During QDR II+ SRAM Read (waveform: CQ, CQ#, and Q at the FPGA pin and at the capture register, with the DQS phase shift)
Figure 4.  Center-aligned K and D Relationship During QDR II+ SRAM Write (waveform: K, K#, and D at the FPGA pin)
The byte write select signal (BWS#) indicates which byte to write into the memory
device.
QDR II+ and QDR II+ Xtreme SRAM devices also have a QVLD pin that indicates valid
read data. The QVLD signal is edge-aligned with the echo clock and is asserted high
for approximately half a clock cycle before data is output from memory.
Note:
The Intel FPGA external memory interface IP does not use the QVLD signal.
1.1.10 QDR IV SRAM Clock Signals
QDR IV SRAM devices have three pairs of differential clocks.
The three QDR IV differential clocks are as follows:
•
Address and Command Input Clocks CK and CK#
•
Data Input Clocks DKx and DKx#, where x can be A or B, referring to the
respective ports
•
Data Output Clocks, QKx and QKx#, where x can be A or B, referring to the
respective ports
QDR IV SRAM devices have two independent bidirectional data ports, Port A and Port
B, to support concurrent read/write transactions on both ports. These data ports are
controlled by a common address port clocked by CK and CK# in double data rate.
There is one pair of CK and CK# pins per QDR IV SRAM device.
DKx and DKx# sample the DQx inputs on both rising and falling edges. Similarly, QKx
and QKx# clock the DQx outputs on both rising and falling edges.
QDR IV SRAM devices employ two sets of free running differential clocks to
accompany the data. The DKx and DKx# clocks are the differential input data clocks
used during writes. The QKx and QKx# clocks are the output data clocks used during
reads. Each pair of DKx and DKx#, or QKx and QKx# clocks is associated with either
9 or 18 data bits.
The polarity of the QKB and QKB# pins in the Intel FPGA external memory interface IP
was swapped with respect to the polarity of the differential input buffer on the FPGA.
In other words, the QKB pins on the memory side must be connected to the negative
pins of the input buffers on the FPGA side, and the QKB# pins on the memory side
must be connected to the positive pins of the input buffers on the FPGA side. Notice
that the port names at the top-level of the IP already reflect this swap (that is,
mem_qkb is assigned to the negative buffer leg, and mem_qkb_n is assigned to the
positive buffer leg).
QDR IV SRAM devices are available in x18 and x36 bus width configurations. The
exact clock-data relationships are as follows:
•
For ×18 data bus width configuration, there are 9 data bits associated with each
pair of write and read clocks. So, there are two pairs of DKx and DKx# pins and
two pairs of QKx or QKx# pins.
•
For ×36 data bus width configuration, there are 18 data bits associated with each
pair of write and read clocks. So, there are two pairs of DKx and DKx# pins and
two pairs of QKx or QKx# pins.
There are tCKDK timing requirements for skew between CK and DKx or CK# and DKx#.
Similarly, there are tCKQK timing requirements for skew between CK and QKx or CK#
and QKx#.
1.1.11 QDR IV SRAM Commands and Addresses, AP, and AINV Signals
The CK and CK# signals clock the commands and addresses into the memory devices.
There is one pair of CK and CK# pins per QDR IV SRAM device. These pins operate at
double data rate using both rising and falling edges. The rising edge of CK latches the
addresses for port A, while the falling edge of CK latches the address inputs for port
B.
QDR IV SRAM devices have the ability to invert all address pins to reduce potential
simultaneous switching noise. Such inversion is accomplished using the Address
Inversion Pin for Address and Address Parity Inputs (AINV), which
assumes an address parity of 0, and indicates whether the address bus and address
parity are inverted.
The above features are available as Option Control under Configuration Register
Settings in Arria 10 EMIF IP. The commands and addresses must meet the memory
address and command setup (tAS, tCS) and hold (tAH, tCH) time requirements.
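One way to picture the AINV mechanism described above is sketched below. The inversion policy shown (invert whenever that lowers the number of address and parity lines driven high) is an assumption made purely for illustration; it is not a rule taken from this handbook or from a QDR IV data sheet.

def drive_qdriv_address(addr_bits):
    """Illustrative AINV model for a list of 0/1 address levels.

    Returns (levels_to_drive, ainv): ainv = 1 means the address bus and its
    parity bit are driven inverted, and the memory re-inverts them internally.
    """
    parity = sum(addr_bits) % 2            # address parity, referenced to 0
    bus = addr_bits + [parity]
    inverted = [b ^ 1 for b in bus]
    if sum(inverted) < sum(bus):           # assumed policy: drive fewer lines high
        return inverted, 1
    return bus, 0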
1.1.12 QDR IV SRAM Data, DINV, and QVLD Signals
The read data is edge-aligned with the QKA or QKB# clocks while the write data is
center-aligned with the DKA and DKB# clocks.
QK is shifted by the DLL so that the clock edges can be used to clock in the DQ at the
capture register.
Figure 5.  Edge-Aligned DQ and QK Relationship During Read (waveform: QK and DQ at the FPGA pin and at the capture register)
Figure 6.  Center-Aligned DQ and DK Relationship During Write (waveform: DK and DQ at the FPGA pin)
The polarity of the QKB and QKB# pins in the Intel FPGA external memory interface IP
was swapped with respect to the polarity of the differential input buffer on the FPGA.
In other words, the QKB pins on the memory side need to be connected to the
negative pins of the input buffers on the FPGA side, and the QKB# pins on the memory
side need to be connected to the positive pins of the input buffers on the FPGA side.
Notice that the port names at the top-level of the IP already reflect this swap (that is,
mem_qkb is assigned to the negative buffer leg, and mem_qkb_n is assigned to the
positive buffer leg).
The synchronous read/write input, RWx#, is used in conjunction with the synchronous
load input, LDx#, to indicate a read or write operation. For port A, these signals are
sampled on the rising edge of the CK clock; for port B, they are sampled on the falling
edge of the CK clock.
QDR IV SRAM devices have the ability to invert all data pins to reduce potential
simultaneous switching noise, using the Data Inversion Pin for DQ Data Bus, DINVx.
This pin indicates whether DQx pins are inverted or not.
To enable the data pin inversion feature, click Configuration Register Settings ➤
Option Control in the Arria 10 or Stratix 10 EMIF IP.
QDR IV SRAM devices also have a QVLD pin which indicates valid read data. The QVLD
signal is edge-aligned with QKx or QKx# and is high approximately one-half clock cycle
before data is output from the memory.
Note:
The Intel FPGA external memory interface IP does not use the QVLD signal.
1.1.13 RLDRAM II and RLDRAM 3 Clock Signals
RLDRAM II and RLDRAM 3 devices use CK and CK# signals to clock the command and
address bus in single data rate (SDR). There is one pair of CK and CK# pins per
RLDRAM II or RLDRAM 3 device.
Instead of a strobe, RLDRAM II and RLDRAM 3 devices use two sets of free-running
differential clocks to accompany the data. The DK and DK# clocks are the differential
input data clocks used during writes while the QK or QK# clocks are the output data
clocks used during reads. Even though QK and QK# signals are not differential signals
according to the RLDRAM II and RLDRAM 3 data sheets, Micron treats these signals as
such for their testing and characterization. Each pair of DK and DK#, or QK and QK#
clocks is associated with either 9 or 18 data bits.
The exact clock-data relationships are as follows:
•
RLDRAM II: For ×36 data bus width configuration, there are 18 data bits
associated with each pair of write and read clocks. So, there are two pairs of DK
and DK# pins and two pairs of QK or QK# pins.
•
RLDRAM 3: For ×36 data bus width configuration, there are 18 data bits
associated with each pair of write clocks. There are 9 data bits associated with
each pair of read clocks. So, there are two pairs of DK and DK# pins and four pairs
of QK and QK# pins.
•
RLDRAM II: For ×18 data bus width configuration, there are 18 data bits per one
pair of write clocks and nine data bits per one pair of read clocks. So, there is one
pair of DK and DK# pins, but there are two pairs of QK and QK# pins.
•
RLDRAM 3: For ×18 data bus width configuration, there are 9 data bits per one
pair of write clocks and nine data bits per one pair of read clocks. So, there are
two pairs of DK and DK# pins, and two pairs of QK and QK# pins
•
RLDRAM II: For ×9 data bus width configuration, there are nine data bits
associated with each pair of write and read clocks. So, there is one pair of DK and
DK# pins and one pair of QK and QK# pins each.
•
RLDRAM 3: RLDRAM 3 does not have the ×9 data bus width configuration.
There are tCKDK timing requirements for skew between CK and DK or CK# and DK#.
For both RLDRAM II and RLDRAM 3, because of the loads on these I/O pins, the
maximum frequency you can achieve depends on the number of memory devices you
are connecting to the Intel device. Perform SPICE or IBIS simulations to analyze the
loading effects of the pin-pair on multiple RLDRAM II or RLDRAM 3 devices.
1.1.14 RLDRAM II and RLDRAM 3 Commands and Addresses
The CK and CK# signals clock the commands and addresses into the memory devices.
These pins operate at single data rate using only one clock edge. RLDRAM II and
RLDRAM 3 support both non-multiplexed and multiplexed addressing. Multiplexed
addressing allows you to save a few user I/O pins while non-multiplexed addressing
allows you to send the address signal within one clock cycle instead of two clock
cycles. CS#, REF#, and WE# pins are input commands to the RLDRAM II or RLDRAM 3
device.
The commands and addresses must meet the memory address and command setup
(tAS, tCS) and hold (tAH, tCH) time requirements.
Note:
The RLDRAM II and RLDRAM 3 external memory interface IP do not support
multiplexed addressing.
1.1.15 RLDRAM II and RLDRAM 3 Data, DM and QVLD Signals
The read data is edge-aligned with the QK or QK# clocks while the write data is
center-aligned with the DK and DK# clocks (see the following figures). The memory
controller shifts the DK and DK# signals to center align the DQ and DK or DK# signals
during a write. It also shifts the QK signal during a read, so that the read data (DQ
signals) and QK clock are center-aligned at the capture register.
Intel devices use dedicated DQS phase-shift circuitry to shift the incoming QK signal
during reads and use a PLL to center-align the DK and DK# signals with respect to the
DQ signals during writes.
Figure 7.  Edge-aligned DQ and QK Relationship During RLDRAM II or RLDRAM 3 Read (waveform: QK and DQ at the FPGA pin and at the DQ LE registers, with the DQS phase shift)
Figure 8.  Center-aligned DQ and DK Relationship During RLDRAM II or RLDRAM 3 Write (waveform: DK and DQ at the FPGA pin)
For RLDRAM II and RLDRAM 3, data mask (DM) pins are used only during a write. The
memory controller drives the DM signal low when the write is valid and drives it high to
mask the DQ signals.
For RLDRAM II, there is one DM pin per memory device. The DQ input signal is masked
when the DM signal is high.
For RLDRAM 3, there are two DM pins per memory device. DM0 is used to mask the
lower byte for the x18 device and (DQ[8:0],DQ[26:18]) for the x36 device. DM1 is
used to mask the upper byte for the x18 device and (DQ[17:9], DQ[35:27]) for the
x36 device.
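The byte-lane mapping just described can be summarized in a small lookup; the sketch below simply restates the bit ranges from the preceding paragraph for the x18 and x36 RLDRAM 3 devices.

# DQ bits masked by each DM pin when that DM pin is driven high during a write.
RLDRAM3_DM_MAP = {
    "x18": {"DM0": [range(0, 9)],                   # lower byte, DQ[8:0]
            "DM1": [range(9, 18)]},                 # upper byte, DQ[17:9]
    "x36": {"DM0": [range(0, 9), range(18, 27)],    # DQ[8:0] and DQ[26:18]
            "DM1": [range(9, 18), range(27, 36)]},  # DQ[17:9] and DQ[35:27]
}

def masked_dq_bits(width, dm_pin):
    """Return the DQ bit positions masked by the given DM pin."""
    return [bit for r in RLDRAM3_DM_MAP[width][dm_pin] for bit in r]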
The DM timing requirements at the input to the memory device are identical to those
for DQ data. The DDR registers, clocked by the write clock, create the DM signals. This
reduces any skew between the DQ and DM signals.
The RLDRAM II or RLDRAM 3 device's setup time (tDS) and hold (tDH) time for the
write DQ and DM pins are relative to the edges of the DK or DK# clocks. The DK and
DK# signals are generated on the positive edge of system clock, so that the positive
edge of CK or CK# is aligned with the positive edge of DK or DK# respectively to meet
the tCKDK requirement. The DQ and DM signals are clocked using a shifted clock so
that the edges of DK or DK# are center-aligned with respect to the DQ and DM signals
when they arrive at the RLDRAM II or RLDRAM 3 device.
The clocks, data, and DM board trace lengths should be tightly matched to minimize
the skew in the arrival time of these signals.
RLDRAM II and RLDRAM 3 devices also have a QVLD pin indicating valid read data.
The QVLD signal is edge-aligned with QK or QK# and is high approximately half a clock
cycle before data is output from the memory.
Note:
The Intel FPGA external memory interface IP does not use the QVLD signal.
1.1.16 LPDDR2 and LPDDR3 Clock Signal
CK and CKn are differential clock inputs to the LPDDR2 and LPDDR3 interface. All the
double data rate (DDR) inputs are sampled on both the positive and negative edges of
the clock. Single data rate (SDR) inputs, CSn and CKE, are sampled at the positive
clock edge.
The clock is defined as the differential pair which consists of CK and CKn. The positive
clock edge is defined by the cross point of a rising CK and a falling CKn. The negative
clock edge is defined by the cross point of a falling CK and a rising CKn.
The SDRAM data sheet specifies timing data for the following:
•
tDSH is the DQS falling edge hold time from CK.
•
tDSS is the DQS falling edge to the CK setup time.
•
tDQSS is the Write command to the first DQS latching transition.
•
tDQSCK is the DQS output access time from CK/CKn.
1.1.17 LPDDR2 and LPDDR3 Command and Address Signal
All LPDDR2 and LPDDR3 devices use double data rate architecture on the command/
address bus to reduce the number of input pins in the system. The 10-bit command/
address bus contains command, address, and bank/row buffer information. Each
command uses one clock cycle, during which command information is transferred on
both the positive and negative edges of the clock.
1.1.18 LPDDR2 and LPDDR3 Data, Data Strobe, and DM Signals
LPDDR2 and LPDDR3 devices use bidirectional and differential data strobes.
Differential DQS operation enables improved system timing due to reduced crosstalk
and less simultaneous switching noise on the strobe output drivers. The DQ pins are
also bidirectional. DQS is edge-aligned with the read data and centered with the write
data.
DM is the input mask for the write data signal. Input data is masked when DM is
sampled high coincident with that input data during a write access.
1.1.19 Maximum Number of Interfaces
The maximum number of interfaces supported for a given memory protocol varies,
depending on the FPGA in use.
Unless otherwise noted, the calculation for the maximum number of interfaces is
based on independent interfaces where the address or command pins are not shared.
The maximum number of independent interfaces is limited to the number of PLLs each
FPGA device has.
Note:
You must share DLLs if the total number of interfaces exceeds the number of DLLs
available in a specific FPGA device. You may also need to share PLL clock outputs
depending on your clock network usage; for details, refer to PLLs and Clock Networks.
Note:
For information about the number of DQ and DQS in other packages, refer to the DQ
and DQS tables in the relevant device handbook.
For interface information for Arria 10 and Stratix 10 devices, you can consult the EMIF
Device Selector on www.altera.com.
Timing closure depends on device resource and routing utilization. For more
information about timing closure, refer to the Area and Timing Optimization
Techniques chapter in the Quartus Prime Handbook.
Related Links
•
PLLs and Clock Networks on page 72
The exact number of clocks and PLLs required in your design depends greatly
on the memory interface frequency, and on the IP that your design uses.
•
Intel Arria 10 Device Handbook
•
External Memory Interface Device Selector
•
Quartus Prime Handbook
1.1.19.1 Maximum Number of DDR SDRAM Interfaces Supported per FPGA
The following table describes the maximum number of ×8 DDR SDRAM components
that can fit in the smallest and biggest devices and pin packages assuming the device
is blank.
Each interface of size n, where n is a multiple of 8, consists of:
•  n DQ pins (including error correction coding (ECC))
•  n/8 DM pins
•  n/8 DQS pins
•  18 address pins
•  6 command pins (CAS#, RAS#, WE#, CKE, and CS#)
•  1 CK, CK# pin pair for up to every three ×8 DDR SDRAM components
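As a quick planning aid, the per-interface pin count implied by the list above can be tallied as in the sketch below. This is a rough estimate only: it counts exactly the items listed above, applies the one-clock-pair-per-three-components rule as stated, and ignores OCT, reset, and any DIMM-specific pins.

import math

def ddr_interface_pin_count(n):
    """Approximate FPGA pin count for one DDR SDRAM interface of width n (n a multiple of 8)."""
    assert n % 8 == 0
    components = n // 8                        # number of x8 DDR SDRAM components
    clock_pairs = math.ceil(components / 3)    # 1 CK/CK# pair per up to three components
    return (n                                  # DQ pins (including ECC)
            + n // 8                           # DM pins
            + n // 8                           # DQS pins
            + 18                               # address pins
            + 6                                # command pins
            + 2 * clock_pairs)                 # CK/CK# pins

# Example: a 72-bit interface (64 data bits plus 8 ECC bits) needs roughly
# 72 + 9 + 9 + 18 + 6 + 6 = 120 pins.
print(ddr_interface_pin_count(72))   # 120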
Table 5.  Maximum Number of DDR SDRAM Interfaces Supported per FPGA
Device
Arria II GX
Device Type
EP2AGX190
EP2AGX260
EP2AGX45
EP2AGX65
Arria II GZ
Package Pin Count
Maximum Number of Interfaces
1,152
Four ×8 interfaces or one ×72 interface on
each side (no DQ pins on left side)
358
EP2AGZ300
EP2AGZ350
EP2AGZ225
1,517
EP2AGZ300
EP2AGZ350
780
•
•
•
Four ×8 interfaces or one ×72 interface on
each side
•
•
•
Stratix III
EP3SL340
1,760
•
•
EP3SE50
484
•
•
Stratix IV
EP4SGX290
EP4SGX360
1,932
On top side, one ×16 interface
On bottom side, one ×16 interface
On right side (no DQ pins on left side),
one ×8 interface
•
On top side, three ×8 interfaces or one
×64 interface
On bottom side, three ×8 interfaces or
one ×64 interface
No DQ pins on the left and right sides
Two ×72 interfaces on both top and
bottom sides
One ×72 interface on both right and left
sides
Two ×8 interfaces on both top and
bottom sides
Three ×8 interfaces on both right and left
sides
One ×72 interface on each side
Device
Device Type
Package Pin Count
Maximum Number of Interfaces
1,760
or
• One ×72 interface on each side and two
additional ×72 wraparound interfaces,
only if sharing DLL and PLL resources
EP4SGX530
EP4SE530
EP4SE820
EP4SGX70
EP4SGX110
EP4SGX180
EP4SGX230
780
•
Three ×8 interfaces or one ×64
interface on both top and bottom sides
On left side, one ×48 interface or two
×8 interfaces
No DQ pins on the right side
•
•
Related Links
External Memory Interface Device Selector
1.1.19.2 Maximum Number of DDR2 SDRAM Interfaces Supported per FPGA
The following table lists the maximum number of ×8 DDR2 SDRAM components that
can be fitted in the smallest and biggest devices and pin packages assuming the
device is blank.
Each interface of size n, where n is a multiple of 8, consists of:
Table 6.
Device
Arria II GX
•
n DQ pins (including ECC)
•
n/8 DM pins
•
n/8 DQS, DQSn pin pairs
•
18 address pins
•
7 command pins (CAS#, RAS#, WE#, CKE, ODT, and CS#)
•
1 CK, CK# pin pair up to every three ×8 DDR2 components
Maximum Number of DDR2 SDRAM Interfaces Supported per FPGA
Device Type
Package Pin Count
EP2AGX190
EP2AGX260
EP2AGX45
EP2AGX65
Arria II GZ
1,152
358
Maximum Number of Interfaces
Four ×8 interfaces or one ×72 interface
on each side (no DQ pins on left side)
•
•
EP2AGZ300
EP2AGZ350
EP2AGZ225
1,517
EP2AGZ300
EP2AGZ350
780
Four ×8 interfaces or one ×72 interface
on each side
•
•
Arria V
5AGXB1
5AGXB3
5AGXB5
5AGXB7
5AGTD3
5AGTD7
1,517
One ×16 interface on both top and
bottom sides
On right side (no DQ pins on left
side), one ×8 interface
•
•
Three ×8 interfaces or one ×64
interface on both top and bottom
sides
No DQ pins on the left and right sides
Two ×72 interfaces on both top and
bottom sides
No DQ pins on left and right sides
Device
Device Type
5AGXA1
5AGXA3
Package Pin Count
672
Maximum Number of Interfaces
•
•
•
5AGXA5
5AGXA7
672
•
•
Arria V GZ
5AGZE5
5AGZE7
1,517
5AGZE1
5AGZE3
780
•
•
•
•
•
Cyclone V
MAX 10 FPGA
Stratix III
Three ×72 interfaces on both top and
bottom sides
No DQ pins on left and right sides
On top side, two ×8 interfaces
On bottom side, four ×8 interfaces or
one ×72 interface
No DQ pins on left and right sides
1,152
5CEA7
5CGTD7
5CGXC7
484
10M50D672
10M40D672
762
One x32 interface on the right side
10M50D256
10M40D256
10M25D256
10M16D256
256
One x8 interface on the right side
•
•
•
•
1,760
•
•
EP3SE50
484
•
•
Stratix IV
One ×56 interface or two x24
interfaces on both top and bottom
sides
No DQ pins on the left side
5CGTD9
5CEA9
5CGXC9
EP3SL340
•
One ×56 interface or two x24
interfaces on both top and bottom
sides
One ×32 interface on the right side
No DQ pins on the left side
EP4SGX290
EP4SGX360
EP4SGX530
1,932
EP4SE530
EP4SE820
1,760
One ×72 interface or two ×32
interfaces on each of the top, bottom,
and right sides
No DQ pins on the left side
One ×48 interface or two ×16
interfaces on both top and bottom
sides
One x8 interface on the right side
No DQ pins on the left side
Two ×72 interfaces on both top and
bottom sides
One ×72 interface on both right and
left sides
Two ×8 interfaces on both top and
bottom sides
Three ×8 interfaces on both right and
left sides
• One ×72 interface on each side
or
• One ×72 interface on each side and
two additional ×72 wraparound
interfaces only if sharing DLL and PLL
resources
Device
Device Type
Package Pin Count
EP4SGX70
EP4SGX110
EP4SGX180
EP4SGX230
Stratix V
Maximum Number of Interfaces
780
•
•
•
5SGXA5
5SGXA7
1,932
5SGXA3
5SGXA4
780
•
•
•
•
•
Three ×8 interfaces or one ×64
interface on top and bottom sides
On left side, one ×48 interface or two
×8 interfaces
No DQ pins on the right side
Three ×72 interfaces on both top and
bottom sides
No DQ pins on left and right sides
On top side, two ×8 interfaces
On bottom side, four ×8 interfaces or
one ×72 interface
No DQ pins on left and right sides
Related Links
External Memory Interface Device Selector
1.1.19.3 Maximum Number of DDR3 SDRAM Interfaces Supported per FPGA
The following table lists the maximum number of ×8 DDR3 SDRAM components that
can be fitted in the smallest and biggest devices and pin packages assuming the
device is blank.
Each interface of size n, where n is a multiple of 8, consists of:
Table 7.
Device
Arria II GX
•
n DQ pins (including ECC)
•
n/8 DM pins
•
n/8 DQS, DQSn pin pairs
•
17 address pins
•
7 command pins (CAS#, RAS#, WE#, CKE, ODT, reset, and CS#)
•
1 CK, CK# pin pair
Maximum Number of DDR3 SDRAM Interfaces Supported per FPGA
Device Type
EP2AGX190
EP2AGX260
Package Pin Count
1,152
•
•
EP2AGX45
EP2AGX65
Arria II GZ
Maximum Number of Interfaces
358
•
•
•
EP2AGZ300
EP2AGZ350
EP2AGZ225
1,517
EP2AGZ300
EP2AGZ350
780
Four ×8 interfaces or one ×72 interface on
each side
No DQ pins on left side
One ×16 interface on both top and bottom
sides
On right side, one ×8 interface
No DQ pins on left side
Four ×8 interfaces on each side
•
•
Three ×8 interfaces on both top and
bottom sides
No DQ pins on left and right sides
Device
Arria V
Arria V GZ
Device Type
Package Pin Count
5AGXB1
5AGXB3
5AGXB5
5AGXB7
5AGTD3
5AGTD7
1,517
5AGXA1
5AGXA3
672
5AGXA5
5AGXA7
672
5AGZE5
5AGZE7
1,517
5AGZE1
5AGZE3
780
Maximum Number of Interfaces
•
•
•
•
•
•
•
•
•
•
•
•
Cyclone V
MAX 10 FPGA
Stratix III
One ×56 interface or two ×24 interfaces
on both top and bottom sides
No DQ pins on the left side
Two ×72 interfaces on both top and
bottom sides
No DQ pins on left and right sides
On top side, four ×8 interfaces or one x72
interface
On bottom side, four ×8 interfaces or one
x72 interface
No DQ pins on left and right sides
1,152
5CEA7
5CGTD7
5CGXC7
484
10M50D672
10M40D672
762
One x32 interface on the right side
10M50D256
10M40D256
10M25D256
10M16D256
256
One x8 interface on the right side
•
•
•
•
1,760
•
•
EP3SE50
484
•
•
Stratix IV
One ×56 interface or two ×24 interfaces
on top and bottom sides
One ×32 interface on the right side
No DQ pins on the left side
5CGTD9
5CEA9
5CGXC9
EP3SL340
•
Two ×72 interfaces on both top and
bottom sides
No DQ pins on left and right sides
EP4SGX290
EP4SGX360
EP4SGX530
1,932
EP4SE530
EP4SE820
1,760
One ×72 interface or two ×32 interfaces
on each of the top, bottom, and right sides
No DQ pins on the left side
One ×48 interface or two ×16 interfaces
on both top and bottom sides
One x8 interface on the right side
No DQ pins on the left side
Two ×72 interfaces on both top and
bottom sides
One ×72 interface on both right and left
sides
Two ×8 interfaces on both top and bottom
sides
Three ×8 interfaces on both right and left
sides
• One ×72 interface on each side
or
• One ×72 interface on each side and 2
additional ×72 wraparound interfaces only
if sharing DLL and PLL resources
Device
Device Type
Package Pin Count
EP4SGX70
EP4SGX110
EP4SGX180
EP4SGX230
Stratix V
780
Maximum Number of Interfaces
•
•
•
5SGXA5
5SGXA7
1,932
5SGXA3
5SGXA4
780
•
Three ×8 interfaces or one ×64 interface
on both top and bottom sides
On left side, one ×48 interface or two ×8
interfaces
No DQ pins on right side
•
Two ×72 interfaces (800 MHz) on both top
and bottom sides
No DQ pins on left and right sides
•
•
•
On top side, two ×8 interfaces
On bottom side, four ×8 interfaces
No DQ pins on left and right sides
Related Links
External Memory Interface Device Selector
1.1.19.4 Maximum Number of QDR II and QDR II+ SRAM Interfaces Supported
per FPGA
The following table lists the maximum number of independent QDR II+ or QDR II
SRAM interfaces that can be fitted in the smallest and biggest devices and pin
packages assuming the device is blank.
One interface of ×36 consists of:
•
36 Q pins
•
36 D pins
•
1 K, K# pin pair
•
1 CQ, CQ# pin pair
•
19 address pins
•
4 BWSn pins
•
WPSn, RPSn
One interface of ×9 consists of:
•
9 Q pins
•
9 D pins
•
1 K, K# pin pair
•
1 CQ, CQ# pin pair
•
21 address pins
•
1 BWSn pin
•
WPSn, RPSn
Table 8.
Device
Arria II GX
Maximum Number of QDR II and QDR II+ SRAM Interfaces Supported per
FPGA
Device Type
EP2AGX190
EP2AGX260
EP2AGX45
EP2AGX65
Arria II GZ
Arria V
Arria V GZ
Stratix III
358
1,517
EP2AGZ300
EP2AGZ350
Maximum Number of Interfaces
One ×36 interface and one ×9 interface on each side
One ×9 interface on each side
No DQ pins on left side
•
Two ×36 interfaces and one ×9 interface on both top and
bottom sides
Four ×9 interfaces on right and left sides
780
•
•
Three ×9 interfaces on both top and bottom sides
No DQ pins on right and left sides
5AGXB1
5AGXB3
5AGXB5
5AGXB7
5AGTD3
5AGTD7
1,517
•
•
Two ×36 interfaces on both top and bottom sides
No DQ pins on left and right sides
5AGXA1
5AGXA3
672
•
•
•
Two ×9 interfaces on both top and bottom sides
One ×9 interface on the right side
No DQ pins on the left side
5AGXA5
5AGXA7
672
•
•
Two ×9 interfaces on both top and bottom sides
No DQ pins on the left side
5AGZE5
5AGZE7
1,517
•
•
Two ×36 interfaces on both top and bottom sides
No DQ pins on left and right sides
5AGZE1
5AGZE3
780
•
•
•
On top side, one ×36 interface or three ×9 interfaces
On bottom side, two ×9 interfaces
No DQ pins on left and right sides
1,760
•
•
Two ×36 interfaces and one ×9 interface on both top and
bottom sides
Five ×9 interfaces on both right and left sides
484
•
•
One ×9 interface on both top and bottom sides
Two ×9 interfaces on both right and left sides
EP4SGX290
EP4SGX360
EP4SGX530
1,932
•
•
Two ×36 interfaces on both top and bottom sides
One ×36 interface on both right and left sides
EP4SE530
EP4SE820
1,760
EP3SL340
EP4SGX70
EP4SGX110
EP4SGX180
EP4SGX230
Stratix V
1,152
EP2AGZ300
EP2AGZ350
EP2AGZ225
EP3SE50
EP3SL50
EP3SL70
Stratix IV
Package Pin
Count
5SGXA5
780
1,932
•
Two ×9 interfaces on each side
No DQ pins on right side
•
•
Two ×36 interfaces on both top and bottom sides
No DQ pins on left and right sides
Device
Device Type
Package Pin
Count
Maximum Number of Interfaces
5SGXA7
5SGXA3
5SGXA4
780
•
•
•
On top side, one ×36 interface or three ×9 interfaces
On bottom side, two ×9 interfaces
No DQ pins on left and right sides
Related Links
External Memory Interface Device Selector
1.1.19.5 Maximum Number of RLDRAM II Interfaces Supported per FPGA
The following table lists the maximum number of independent RLDRAM II interfaces
that can be fitted in the smallest and biggest devices and pin packages assuming the
device is blank.
One common I/O ×36 interface consists of:
•
36 DQ
•
1 DM pin
•
2 DK, DK# pin pairs
•
2 QK, QK# pin pairs
•
1 CK, CK# pin pair
•
24 address pins
•
1 CS# pin
•
1 REF# pin
•
1 WE# pin
One common I/O ×9 interface consists of:
Table 9.
Device
Arria II GZ
•
9 DQ
•
1 DM pin
•
1 DK, DK# pin pair
•
1 QK, QK# pin pair
•
1 CK, CK# pin pair
•
25 address pins
•
1 CS# pin
•
1 REF# pin
•
1 WE# pin
Maximum Number of RLDRAM II Interfaces Supported per FPGA
Device Type
EP2AGZ300
EP2AGZ350
Package Pin
Count
1,517
Maximum Number of RLDRAM II CIO Interfaces
Two ×36 interfaces on each side
Device
Device Type
Package Pin
Count
Maximum Number of RLDRAM II CIO Interfaces
EP2AGZ225
EP2AGZ300
EP2AGZ350
Arria V
Arria V GZ
Stratix III
Stratix IV
•
•
Three ×9 interfaces or one ×36 interface on both top and
bottom sides
No DQ pins on the left and right sides
5AGXB1
5AGXB3
5AGXB5
5AGXB7
5AGTD3
5AGTD7
1,517
•
•
Two ×36 interfaces on both top and bottom sides
No DQ pins on left and right sides
5AGXA1
5AGXA3
672
•
•
•
One ×36 interface on both top and bottom sides
One ×18 interface on the right side
No DQ pins on the left side
5AGXA5
5AGXA7
672
•
•
One ×36 interface on both top and bottom sides
No DQ pins on the left side
5AGZE5
5AGZE7
1,517
•
•
Four ×36 interfaces on both top and bottom sides
No DQ pins on left and right sides
5AGZE1
5AGZE3
780
•
•
•
On top side, three ×9 interfaces or two ×36 interfaces
On bottom side, two ×9 interfaces or one ×36 interface
No DQ pins on left and right sides
EP3SL340
1,760
•
•
Four ×36 components on both top and bottom sides
Three ×36 interfaces on both right and left sides
EP3SE50
EP3SL50
EP3SL70
484
One ×9 interface on both right and left sides
EP4SGX290
EP4SGX360
EP4SGX530
1,932
•
•
Three ×36 interfaces on both top and bottom sides
Two ×36 interfaces on both right and left sides
EP4SE530
EP4SE820
1,760
•
Three ×36 interfaces on each side
EP4SGX70
EP4SGX110
EP4SGX180
EP4SGX230
Stratix V
780
780
One ×36 interface on each side (no DQ pins on right side)
5SGXA5
5SGXA7
1,932
•
•
Four ×36 interfaces on both top and bottom sides
No DQ pins on left and right sides
5SGXA3
5SGXA4
780
•
•
•
On top side, two ×9 interfaces or one ×18 interface
On bottom side, three ×9 interfaces or two ×36 interfaces
No DQ pins on left and right sides
Related Links
External Memory Interface Device Selector
1.1.19.6 Maximum Number of LPDDR2 SDRAM Interfaces Supported per FPGA
The following table lists the maximum number of x8 LPDDR2 SDRAM components that
can fit in the smallest and largest devices and pin packages, assuming the device is
blank.
Each interface of size n, where n is a multiple of 8, consists of:
•  n DQ pins (including ECC)
•  n/8 DM pins
•  n/8 DQS, DQSn pin pairs
•  10 address pins
•  2 command pins (CKE and CSn)
•  1 CK, CK# pin pair up to every three x8 LPDDR2 components
Table 10.  Maximum Number of LPDDR2 SDRAM Interfaces Supported per FPGA
Arria V (5AGXB1, 5AGXB3, 5AGXB5, 5AGXB7, 5AGTD3, 5AGTD7; 1,517-pin package): One ×72 interface on both top and bottom sides; no DQ pins on the left and right sides
Arria V (5AGXA1, 5AGXA3; 672-pin package): One ×64 interface or two ×24 interfaces on both top and bottom sides; one ×32 interface on the right side
Arria V (5AGXA5, 5AGXA7; 672-pin package): One ×64 interface or two ×24 interfaces on both the top and bottom sides; no DQ pins on the left side
Cyclone V (5CGTD9, 5CEA9, 5CGXC9; 1,152-pin package): One ×72 interface or two ×32 interfaces on each of the top, bottom, and right sides; no DQ pins on the left side
Cyclone V (5CEA7, 5CGTD7, 5CGXC7; 484-pin package): One ×48 interface or two ×16 interfaces on both the top and bottom sides; one ×8 interface on the right side; no DQ pins on the left side
MAX 10 FPGA (10M50D672, 10M40D672; 762-pin package): One x16 interface on the right side
MAX 10 FPGA (10M50D256, 10M40D256, 10M25D256, 10M16D256; 256-pin package): One x16 interface on the right side
Related Links
External Memory Interface Device Selector
1.1.20 OCT Support
If the memory interface uses any FPGA OCT calibrated series, parallel, or dynamic
termination for any I/O in your design, you need a calibration block for the OCT
circuitry. This calibration block is not required to be within the same bank or side of
the device as the memory interface pins. However, the block requires a pair of RUP and
RDN or RZQ pins that must be placed within an I/O bank that has the same VCCIO
voltage as the VCCIO voltage of the I/O pins that use the OCT calibration block.
The RZQ pin in Arria 10, Stratix 10, Arria V, Stratix V, and Cyclone V devices can be
used as a general purpose I/O pin when it is not used to support OCT, provided the
signal conforms to the bank voltage requirements.
The RUP and RDN pins in Arria II GX, Arria II GZ, MAX 10, Stratix III, and Stratix IV
devices are dual-function pins that can also be used as DQ and DQS pins when
they are not used to support OCT, giving the following impacts on your DQS groups:
•
If the RUP and RDN pins are part of a ×4 DQS group, you cannot use that DQS
group in ×4 mode.
•
If the RUP and RDN pins are part of a ×8 DQS group, you can only use this group in
×8 mode if any of the following conditions apply:
—
You are not using DM or BWSn pins.
—
You are not using a ×8 or ×9 QDR II SRAM device, as the RUP and RDN pins
may have dual purpose function as the CQn pins. In this case, pick different
pin locations for RUP and RDN pins, to avoid conflict with memory interface pin
placement. You have the choice of placing the RUP and RDN pins in the same
bank as the write data pin group or address and command pin group.
—
You are not using complementary or differential DQS pins.
Note: The Altera external memory interface IP does not support ×8 QDR II SRAM devices in
the Quartus Prime software.
A DQS/DQ ×8/×9 group in Arria II GZ, Stratix III, and Stratix IV devices comprises 12
pins. A typical ×8 memory interface consists of one DQS, one DM, and eight DQ pins
which add up to 10 pins. If you choose your pin assignment carefully, you can use the
two extra pins for RUP and RDN. However, if you are using differential DQS, you do not
have enough pins for RUP and RDN as you only have one pin leftover. In this case, as
you do not have to put the OCT calibration block with the DQS or DQ pins, you can pick
different locations for the RUP and RDN pins. As an example, you can place them in the I/O
bank that contains the address and command pins, as this I/O bank has the same
VCCIO voltage as the I/O bank containing the DQS and DQ pins.
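The pin budget in the preceding paragraph can be expressed as a quick check. The sketch below only restates the arithmetic given above for a 12-pin ×8/×9 DQS/DQ group; it is an illustration, not a pin-planning tool.

def spare_pins_in_x8_group(differential_dqs, use_dm=True):
    """Pins left over for RUP/RDN in a 12-pin x8/x9 DQS/DQ group."""
    used = 8 + (1 if use_dm else 0) + (2 if differential_dqs else 1)  # DQ + DM + DQS (or DQS/DQSn)
    return 12 - used

print(spare_pins_in_x8_group(differential_dqs=False))  # 2, enough for RUP and RDN
print(spare_pins_in_x8_group(differential_dqs=True))   # 1, not enough for both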
There is no restriction when using ×16/×18 or ×32/×36 DQS groups that include the
×4 groups when pin members are used as RUP and RDN pins, as there are enough extra
pins that can be used as DQS or DQ pins.
You must pick your DQS and DQ pins manually for the ×8, ×9, ×16 and ×18, or ×32
and ×36 groups, if they are using RUP and RDN pins within the group. The Quartus
Prime software might not place these pins optimally and might be unable to fit the
design.
1.2 Guidelines for Intel Arria® 10 External Memory Interface IP
The Intel Arria® 10 device contains up to two I/O columns that can be used by
external memory interfaces. The Arria 10 I/O subsystem resides in the I/O columns.
Each column contains multiple I/O banks, each of which consists of four I/O lanes. An
I/O lane is a group of twelve I/O ports.
The I/O column, I/O bank, I/O lane, adjacent I/O bank, and pairing pin for every
physical I/O pin can be uniquely identified using the Bank Number and Index
within I/O Bank values which are defined in each Arria 10 device pin-out file.
•
The numeric component of the Bank Number value identifies the I/O column,
while the letter represents the I/O bank.
•
The Index within I/O Bank value falls within one of the following ranges: 0 to
11, 12 to 23, 24 to 35, or 36 to 47, and represents I/O lanes 1, 2, 3, and 4,
respectively.
•
The adjacent I/O bank is defined as the I/O bank with the same column number and
a letter immediately before or after the respective I/O bank letter in the A-Z system.
•
The pairing pin for an I/O pin is located in the same I/O bank. You can identify the
pairing pin by adding one to its Index within I/O Bank number (if it is an
even number), or by subtracting one from its Index within I/O Bank number
(if it is an odd number).
For example, a physical pin with a Bank Number of 2K and Index within I/O
Bank of 22, indicates that the pin resides in I/O lane 2, in I/O bank 2K, in column 2.
The adjacent I/O banks are 2J and 2L. The pairing pin for this physical pin is the pin
with an Index within I/O Bank of 23 and Bank Number of 2K.
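The decoding rules above can be restated in a few lines of code. The sketch below uses the same example values as the preceding paragraph; the only assumption it adds is the obvious string handling of the Bank Number value.

def decode_emif_pin(bank_number, index_in_bank):
    """Decode an Arria 10 pin's I/O column, bank, lane, pairing pin, and adjacent banks.

    bank_number is the pin-out file value, e.g. "2K": the numeric component is
    the I/O column and the trailing letter is the I/O bank.
    """
    column, bank = bank_number[:-1], bank_number[-1]
    lane = index_in_bank // 12 + 1                 # indices 0-11 -> lane 1, 12-23 -> lane 2, ...
    pairing = index_in_bank + 1 if index_in_bank % 2 == 0 else index_in_bank - 1
    adjacent = [column + chr(ord(bank) - 1), column + chr(ord(bank) + 1)]
    return {"column": column, "bank": bank, "lane": lane,
            "pairing_index": pairing, "adjacent_banks": adjacent}

# Example from the text: Bank Number "2K", Index within I/O Bank 22 decodes to
# column 2, bank K, I/O lane 2, pairing pin index 23, adjacent banks 2J and 2L.
print(decode_emif_pin("2K", 22))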
Related Links
Restrictions on I/O Bank Usage for Arria 10 EMIF IP with HPS
1.2.1 General Pin-Out Guidelines for Arria 10 EMIF IP
You should follow the recommended guidelines when performing pin placement for all
external memory interface pins targeting Arria 10 devices, whether you are using the
Altera hard memory controller or your own solution.
If you are using the Altera hard memory controller, you should employ the relative pin
locations defined in the <variation_name>/altera_emif_arch_nf_version
number/<synth|sim>/<variation_name>_altera_emif_arch_nf_version
number_<unique ID>_readme.txt file, which is generated with your IP.
Note:
1. EMIF IP pin-out requirements for the Arria 10 Hard Processor Subsystem (HPS)
are more restrictive than for a non-HPS memory interface. The HPS EMIF IP
defines a fixed pin-out in the Quartus Prime IP file (.qip), based on the IP
configuration. When targeting Arria 10 HPS, you do not need to make location
assignments for external memory interface pins. To obtain the HPS-specific
external memory interface pin-out, compile the interface in the Quartus Prime
software. Alternatively, consult the device handbook or the device pin-out files.
For information on how you can customize the HPS EMIF pin-out, refer to
Restrictions on I/O Bank Usage for Arria 10 EMIF IP with HPS.
2. Ping Pong PHY, PHY only, RLDRAMx, QDRx, and LPDDR3 are not supported with
HPS.
Observe the following general guidelines for placing pins for your Arria 10 external
memory interface:
1.
Ensure that the pins of a single external memory interface reside within a single
I/O column.
2.
An external memory interface can occupy one or more banks in the same I/O
column. When an interface must occupy multiple banks, ensure that those banks
are adjacent to one another.
3. Be aware that any pin in the same bank that is not used by an external memory
interface is available for use as a general purpose I/O of compatible voltage and
termination settings.
4. All address and command pins and their associated clock pins (CK and CK#) must
reside within a single bank. The bank containing the address and command pins is
identified as the address and command bank.
5. To minimize latency, when the interface uses more than two banks, you must
select the center bank of the interface as the address and command bank.
6. The address and command pins and their associated clock pins in the address and
command bank must follow a fixed pin-out scheme, as defined in the Arria 10
External Memory Interface Pin Information File, which is available on
www.altera.com.
You do not have to place every address and command pin manually. If you assign
the location for one address and command pin, the Fitter automatically places the
remaining address and command pins.
Note: The pin-out scheme is a hardware requirement that you must follow, and
can vary according to the topology of the memory device. Some schemes
require three lanes to implement address and command pins, while others
require four lanes. To determine which scheme to follow, refer to the
messages window during parameterization of your IP, or to the
<variation_name>/altera_emif_arch_nf_<version>/<synth|
sim>/
<variation_name>_altera_emif_arch_nf_<version>_<unique
ID>_readme.txt file after you have generated your IP.
7.
An unused I/O lane in the address and command bank can serve to implement a
data group, such as a x8 DQS group. The data group must be from the same
controller as the address and command signals.
8.
An I/O lane must not be used by both address and command pins and data pins.
9.
Place read data groups according to the DQS grouping in the pin table and pin
planner. Read data strobes (such as DQS and DQS#) or read clocks (such as CQ
and CQ# / QK and QK#) must reside at physical pins capable of functioning as
DQS/CQ and DQSn/CQn for a specific read data group size. You must place the
associated read data pins (such as DQ and Q), within the same group.
Note: a. Unlike other device families, there is no need to swap CQ/CQ# pins in
certain QDR II and QDR II+ latency configurations.
b. QDR-IV requires that the polarity of all QKB/QKB# pins be swapped with
respect to the polarity of the differential buffer inputs on the FPGA to
ensure correct data capture on port B. All QKB pins on the memory
device must be connected to the negative pins of the input buffers on
the FPGA side, and all QKB# pins on the memory device must be
connected to the positive pins of the input buffers on the FPGA side.
Notice that the port names at the top-level of the IP already reflect this
swap (that is, mem_qkb is assigned to the negative buffer leg, and
mem_qkb_n is assigned to the positive buffer leg).
10. You can use a single I/O lane to implement two x4 DQS groups. The pin table
specifies which pins within an I/O lane can be used for the two pairs of DQS and
DQS# signals. In addition, for x4 DQS groups you must observe the following
rules:
•
There must be an even number of x4 groups in an external memory interface.
•
DQS group 0 and DQS group 1 must be placed in the same I/O lane. Similarly,
DQS group 2 and group 3 must be in the same I/O lane. Generally, DQS group
X and DQS group X+1 must be in the same I/O lane, where X is an even
number.
11. You should place the write data groups according to the DQS grouping in the pin
table and pin planner. Output-only data clocks for QDR II, QDR II+, and QDR II+
Extreme, and RLDRAM 3 protocols need not be placed on DQS/DQSn pins, but
must be placed on a differential pin pair. They must be placed in the same I/O
bank as the corresponding DQS group.
Note: For RLDRAM 3, x36 device, DQ[8:0] and DQ[26:18] are referenced to
DK0/DK0#, and DQ[17:9] and DQ[35:27] are referenced to DK1/DK1#.
12. For protocols and topologies with bidirectional data pins where a write data group
consists of multiple read data groups, you should place the data groups and their
respective write and read clock in the same bank to improve I/O timing.
You do not need to specify the location of every data pin manually. If you assign
the location for the read capture strobe/clock pin pairs, the Fitter will
automatically place the remaining data pins.
13. Ensure that DM/BWS pins are paired with a write data pin by placing one in an I/O
pin and another in the pairing pin for that I/O pin. It is recommended—though not
required—that you follow the same rule for DBI pins, so that at a later date you
have the freedom to repurpose the pin as DM.
Note:
1.
x4 mode does not support DM/DBI, and is not supported by Arria 10 EMIF IP for HPS.
2.
If you are using an Arria 10 EMIF IP-based RLDRAM II or RLDRAM 3 external
memory interface, you should ensure that all the pins in a DQS group (that is, DQ,
DM, DK, and QK) are placed in the same I/O bank. This requirement facilitates
timing closure and is necessary for successful compilation of your design.
Multiple Interfaces in the Same I/O Column
To place multiple interfaces in the same I/O column, you must ensure that the global
reset signals (global_reset_n) for each individual interface all come from the same
input pin or signal.
I/O Banks Selection
•
For each memory interface, select consecutive I/O banks.
•
A memory interface can only span across I/O banks in the same I/O column.
•
Because I/O bank 2A is also employed for configuration-related operations, you
can use it to construct external memory interfaces only when the following
conditions are met:
—
The pins required for configuration related use (such as configuration bus for
Fast Passive Parallel mode or control signals for Partial Reconfiguration) are
never shared with pins selected for EMIF use, even after configuration is
complete.
—
The I/O voltages are compatible.
—
The design has achieved a successful fit in the Quartus Prime software.
Refer to the Arria 10 Device Handbook and the Configuration Function column of
the Pin-Out files for more information about pins and configuration modes.
•
The number of I/O banks that you require depends on the memory interface
width.
•
The 3V I/O bank does not support dynamic OCT or calibrated OCT. To place a
memory interface in a 3V I/O bank, ensure that calibrated OCT is disabled for the
address/command signals, the memory clock signals, and the data bus signals,
during IP generation.
•
In some device packages, the number of I/O pins in some LVDS I/O banks is less
than 48 pins.
Address/Command Pins Location
•
All address/command pins for a controller must be in a single I/O bank.
•
If your interface uses multiple I/O banks, the address/command pins must use the
middle bank. If the number of banks used by the interface is even, either of the two
middle I/O banks can be used for address/command pins.
•
Address/command pins and data pins cannot share an I/O lane but can share an
I/O bank.
•
The address/command pin locations for the soft and hard memory controllers are
predefined. In the External Memory Interface Pin Information for Devices
spreadsheet, each index in the "Index within I/O bank" column denotes a
dedicated address/command pin function for a given protocol. The index number
of the pin specifies to which I/O lane the pin belongs:
—
I/O lane 0—Pins with index 0 to 11
—
I/O lane 1—Pins with index 12 to 23
—
I/O lane 2—Pins with index 24 to 35
—
I/O lane 3—Pins with index 36 to 47
•
For memory topologies and protocols that require only three I/O lanes for the
address/command pins, use I/O lanes 0, 1, and 2.
•
Unused address/command pins in an I/O lane can be used as general-purpose I/O
pins.
CK Pins Assignment
Assign the clock pin (CK pin) according to the number of I/O banks in the interface:
• If the number of I/O banks is odd, assign one CK pin to the middle I/O bank.
• If the number of I/O banks is even, assign the CK pin to either of the two middle I/O banks.
Although the Fitter can automatically select the required I/O banks, Intel recommends that you make the selection manually to reduce the pre-fit run time.
PLL Reference Clock Pin Placement
Place the PLL reference clock pin in the address/command bank; other I/O banks may not have free pins that you can use as the PLL reference clock pin.
• If you are sharing the PLL reference clock pin between several interfaces, the I/O banks must be consecutive.
The Arria 10 External Memory Interface IP does not support PLL cascading.
RZQ Pin Placement
You may place the RZQ pin in any I/O bank in an I/O column with the correct VCCIO and VCCPT for the memory interface I/O standard in use. The recommended location is the address/command I/O bank.
DQ and DQS Pins Assignment
Intel recommends that you assign the DQS pins to the remaining I/O lanes in the I/O banks as required:
• Constrain the DQ and DQS signals of the same DQS group to the same I/O lane.
• DQ signals from two different DQS groups cannot be constrained to the same I/O lane.
If you do not specify the DQS pin assignments, the Fitter selects the DQS pins automatically.
Sharing an I/O Bank Across Multiple Interfaces
If you are sharing an I/O bank across multiple external memory interfaces, follow these guidelines:
• The interfaces must use the same protocol, voltage, data rate, frequency, and PLL reference clock.
• You cannot use an I/O bank as the address/command bank for more than one interface. The memory controller and sequencer cannot be shared.
• You cannot share an I/O lane. There is only one DQS input per I/O lane, and an I/O lane can only connect to one memory controller.
Ping Pong PHY Implementation
The Ping Pong PHY feature instantiates two hard memory controllers—one for the
primary interface and one for the secondary interface. The hard memory controller I/O
bank of the primary interface is used for address and command and is always adjacent
and above the hard memory controller I/O bank of the secondary interface. All four
lanes of the primary hard memory controller I/O bank are used for address and
command.
When you use Ping Pong PHY, the EMIF IP exposes two independent Avalon-MM
interfaces to user logic; these interfaces correspond to the two hard memory
controllers inside the interface. Each Avalon-MM interface has its own set of clock and
reset signals. Refer to Qsys Interfaces for more information on the additional signals
exposed by Ping Pong PHY interfaces.
For more information on Ping Pong PHY in Arria 10, refer to Functional Description—
Arria 10 EMIF, in this handbook. For pin allocation information for Arria 10 devices,
refer to External Memory Interface Pin Information for Arria 10 Devices on
www.altera.com.
Additional Requirements for DDR3 and DDR4 Ping-Pong PHY Interfaces
If you are using Ping Pong PHY with a DDR3 or DDR4 external memory interface on an
Arria 10 device, follow these guidelines:
• The address and command I/O bank must not contain any DQS group.
• I/O banks that are above the address and command I/O bank must contain only data pins of the primary interface, that is, the interface with the lower DQS group indices.
• The I/O bank immediately below the address and command I/O bank must contain at least one DQS group of the secondary interface, that is, the interface with the higher DQS group indices. This I/O bank can, but is not required to, contain DQS groups of the primary interface.
• I/O banks that are two or more banks below the address and command I/O bank must contain only data pins of the secondary interface.
Related Links
• Pin-Out Files for Intel FPGAs
• Functional Description—Arria 10 EMIF
• External Memory Interface Pin Information for Arria 10 Devices
• Restrictions on I/O Bank Usage for Arria 10 EMIF IP with HPS
1.2.2 Resource Sharing Guidelines for Arria 10 EMIF IP
In Arria 10, different external memory interfaces can share PLL reference clock pins,
core clock networks, I/O banks, and hard Nios processors. Each I/O bank has its own DLL and PLL resources; therefore, these do not need to be shared. The Fitter automatically
merges DLL and PLL resources when a bank is shared by different external memory
interfaces, and duplicates them for a multi-I/O-bank external memory interface.
Multiple Interfaces in the Same I/O Column
To place multiple interfaces in the same I/O column, you must ensure that the global
reset signals (global_reset_n) for each individual interface all come from the same
input pin or signal.
PLL Reference Clock Pin
To conserve pin usage and enable core clock network and I/O bank sharing, you can
share a PLL reference clock pin between multiple external memory interfaces. Sharing
of a PLL reference clock pin also implies sharing of the reference clock network.
Observe the following guidelines for sharing the PLL reference clock pin:
1. To share a PLL reference clock pin, connect the same signal to the pll_ref_clk port of multiple external memory interfaces in the RTL code; a minimal sketch follows the note below.
2. Place related external memory interfaces in the same I/O column.
3. Place related external memory interfaces in adjacent I/O banks. If you leave an unused I/O bank between the I/O banks used by the external memory interfaces, that I/O bank cannot be used by any other external memory interface with a different PLL reference clock signal.
Note: The pll_ref_clk pin can be placed in the address and command I/O bank or in a data I/O bank; there is no impact on timing. However, for greatest flexibility during debug (such as when creating designs with narrower interfaces), the recommended placement is in the address and command I/O bank.
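The following Verilog fragment is a minimal sketch of guidelines 1 through 3. The module name emif_a10_core, the instance names, and the reduced port lists are placeholders for your generated IP; only the connections relevant to sharing are shown.

  // Two interfaces in the same I/O column: one PLL reference clock pin and
  // one reset source feed both instances, so the reference clock network is
  // shared and the interfaces may later share I/O banks and core clocks.
  emif_a10_core u_emif_0 (
    .pll_ref_clk    (shared_ref_clk),
    .global_reset_n (global_reset_n)
    // memory-device and Avalon-MM ports omitted
  );

  emif_a10_core u_emif_1 (
    .pll_ref_clk    (shared_ref_clk),
    .global_reset_n (global_reset_n)
    // memory-device and Avalon-MM ports omitted
  );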
Core Clock Network
To access all external memory interfaces synchronously and to reduce global clock
network usage, you may share the same core clock network with other external
memory interfaces.
Observe the following guidelines for sharing the core clock network:
1. To share a core clock network, connect the clks_sharing_master_out port of the master to the clks_sharing_slave_in port of all slaves in the RTL code, as illustrated in the sketch after this list.
2. Place related external memory interfaces in the same I/O column.
3. Related external memory interfaces must have the same rate, memory clock frequency, and PLL reference clock.
4. If you are sharing core clocks between a Ping Pong PHY and a hard controller that have the same protocol, rate, and frequency, the Ping Pong PHY must be the core clock master.
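A minimal Verilog sketch of the core-clock sharing connection follows, under the same assumptions as the previous example (placeholder module and signal names). In practice, the master and slave are separate IP variants generated with the corresponding core clocks sharing setting, and the width of the sharing bus is defined by the generated IP.

  // Placeholder width; use the width of the clks_sharing ports in your
  // generated IP.
  wire [1:0] core_clks_sharing;

  // The core clock master drives the sharing bus ...
  emif_a10_master u_emif_master (
    .pll_ref_clk             (shared_ref_clk),
    .global_reset_n          (global_reset_n),
    .clks_sharing_master_out (core_clks_sharing)
  );

  // ... and every slave receives it, so all related interfaces are clocked
  // from the same core clock network.
  emif_a10_slave u_emif_slave (
    .pll_ref_clk             (shared_ref_clk),
    .global_reset_n          (global_reset_n),
    .clks_sharing_slave_in   (core_clks_sharing)
  );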
I/O Bank
To reduce I/O bank utilization, you may share an I/O Bank with other external
memory interfaces.
Observe the following guidelines for sharing an I/O Bank:
1. Related external memory interfaces must have the same protocol, rate, memory clock frequency, and PLL reference clock.
2. You cannot use a given I/O bank as the address and command bank for more than one external memory interface.
3. You cannot share an I/O lane between external memory interfaces, but an unused pin can serve as a general-purpose I/O pin of compatible voltage and termination standards.
Hard Nios Processor
All external memory interfaces residing in the same I/O column will share the same
hard Nios processor. The shared hard Nios processor calibrates the external memory
interfaces serially.
Reset Signal
When multiple external memory interfaces occupy the same I/O column, they must
share the same IP reset signal.
1.3 Guidelines for Intel Stratix® 10 External Memory Interface IP
Intel Stratix® 10 devices contain up to three I/O columns that external memory
interfaces can use. The Stratix 10 I/O subsystem resides in the I/O columns. Each
column contains multiple I/O banks, each of which consists of four I/O lanes. An I/O
lane is a group of twelve I/O ports.
The I/O column, I/O bank, I/O lane, adjacent I/O bank, and pairing pin for every
physical I/O pin can be uniquely identified by the Bank Number and Index within
I/O Bank values, which are defined in each Stratix 10 device pin-out file.
• The numeric component of the Bank Number value identifies the I/O column, while the letter represents the I/O bank.
• The Index within I/O Bank value falls within one of the following ranges: 0 to 11, 12 to 23, 24 to 35, or 36 to 47, and represents I/O lanes 1, 2, 3, and 4, respectively.
• The adjacent I/O bank is the I/O bank with the same column number, but whose letter immediately precedes or follows the letter of the respective I/O bank in the A-Z system.
• The pairing pin for an I/O pin is located in the same I/O bank. You can identify the pairing pin by adding one to its Index within I/O Bank number (if it is an even number), or by subtracting one from its Index within I/O Bank number (if it is an odd number).
For example, a physical pin with a Bank Number of 2M and an Index within I/O Bank of 22 resides in I/O lane 2, in I/O bank 2M, in column 2. The adjacent I/O banks are 2L and 2N. The pairing pin for this physical pin is the pin with an Index within I/O Bank of 23 and a Bank Number of 2M (see the sketch below).
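Purely as an illustration (this function is not part of any Intel IP), the pairing rule amounts to toggling the least-significant bit of the Index within I/O Bank value, which adds one to an even index and subtracts one from an odd index:

  // Pairing pin within the same I/O bank: even index pairs with index + 1,
  // odd index pairs with index - 1, i.e. toggle bit 0 of the index.
  function automatic [5:0] pairing_index;
    input [5:0] index_within_bank;   // 0 to 47
    begin
      pairing_index = index_within_bank ^ 6'd1;
    end
  endfunction
  // Example: pairing_index(22) = 23, matching the bank 2M example above.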
1.3.1 General Pin-Out Guidelines for Stratix 10 EMIF IP
You should follow the recommended guidelines when placing pins for all external
memory interface pins targeting Stratix 10 devices, whether you are using the hard
memory controller or your own solution.
If you are using the hard memory controller, you should employ the relative pin locations defined in the <variation_name>/altera_emif_arch_nd_<version>/<synth|sim>/<variation_name>_altera_emif_arch_nd_<version>_<unique ID>_readme.txt file, which is generated with your IP.
Note:
1. EMIF IP pin-out requirements for the Stratix 10 Hard Processor Subsystem (HPS) are more restrictive than for a non-HPS memory interface. The HPS EMIF IP defines a fixed pin-out in the Quartus Prime IP file (.qip), based on the IP configuration. When targeting Stratix 10 HPS, you do not need to make location assignments for external memory interface pins. To obtain the HPS-specific external memory interface pin-out, compile the interface in the Quartus Prime software. Alternatively, consult the device handbook or the device pin-out files. For information on how you can customize the HPS EMIF pin-out, refer to Restrictions on I/O Bank Usage for Stratix 10 EMIF IP with HPS.
2. Ping Pong PHY, PHY only, RLDRAMx, QDRx, and LPDDR3 are not supported with HPS.
Observe the following guidelines when placing pins for your Stratix 10 external
memory interface:
1. Ensure that the pins of a single external memory interface reside within a single
I/O column.
2. An external memory interface can occupy one or more banks in the same I/O column. When an interface must occupy multiple banks, ensure that those banks are adjacent to one another. (That is, the banks must have the same column number and letters immediately before or after the letter of the respective I/O bank.)
3. Be aware that any pin in the same bank that is not used by an external memory interface is available for use as a general-purpose I/O pin of compatible voltage and termination settings.
4. All address and command pins and their associated clock pins (CK and CK#) must reside within a single bank. The bank containing the address and command pins is identified as the address and command bank.
5. To minimize latency, when the interface uses more than two banks, you must select the center bank of the interface as the address and command bank.
6. The address and command pins and their associated clock pins in the address and command bank must follow a fixed pin-out scheme, as defined in the Stratix 10 External Memory Interface Pin Information File, which is available on www.altera.com.
You do not have to place every address and command pin manually. If you assign the location for one address and command pin, the Fitter automatically places the remaining address and command pins.
Note: The pin-out scheme is a hardware requirement that you must follow, and can vary according to the topology of the memory device. Some schemes require three lanes to implement address and command pins, while others require four lanes. To determine which scheme to follow, refer to the messages window during parameterization of your IP, or to the <variation_name>/altera_emif_arch_nd_<version>/<synth|sim>/<variation_name>_altera_emif_arch_nd_<version>_<unique ID>_readme.txt file after you have generated your IP.
7. An unused I/O lane in the address and command bank can serve to implement a data group, such as a x8 DQS group. The data group must be from the same controller as the address and command signals.
8. An I/O lane must not be used by both address and command pins and data pins.
9. Place read data groups according to the DQS grouping in the pin table and Pin Planner. Read data strobes (such as DQS and DQS#) or read clocks (such as CQ and CQ# / QK and QK#) must reside at physical pins capable of functioning as DQS/CQ and DQSn/CQn for a specific read data group size. You must place the associated read data pins (such as DQ and Q) within the same group.
Note:
a. Unlike other device families, there is no need to swap CQ/CQ# pins in certain QDR II and QDR II+ latency configurations.
b. QDR-IV requires that the polarity of all QKB/QKB# pins be swapped with respect to the polarity of the differential buffer inputs on the FPGA to ensure correct data capture on port B. All QKB pins on the memory device must be connected to the negative pins of the input buffers on the FPGA side, and all QKB# pins on the memory device must be connected to the positive pins of the input buffers on the FPGA side. Notice that the port names at the top level of the IP already reflect this swap (that is, mem_qkb is assigned to the negative buffer leg, and mem_qkb_n is assigned to the positive buffer leg).
10. You can implement two x4 DQS groups with a single I/O lane. The pin table specifies which pins within an I/O lane can be used for the two pairs of DQS and DQS# signals. In addition, for x4 DQS groups you must observe the following rules:
• There must be an even number of x4 groups in an external memory interface.
• DQS group 0 and DQS group 1 must be placed in the same I/O lane. Similarly, DQS group 2 and DQS group 3 must be in the same I/O lane. Generally, DQS group X and DQS group X+1 must be in the same I/O lane, where X is an even number.
11. You should place the write data groups according to the DQS grouping in the pin table and Pin Planner. Output-only data clocks for the QDR II, QDR II+, QDR II+ Xtreme, and RLDRAM 3 protocols need not be placed on DQS/DQSn pins, but must be placed on a differential pin pair. They must be placed in the same I/O bank as the corresponding DQS group.
Note: For an RLDRAM 3 x36 device, DQ[8:0] and DQ[26:18] are referenced to DK0/DK0#, and DQ[17:9] and DQ[35:27] are referenced to DK1/DK1#.
12. For protocols and topologies with bidirectional data pins where a write data group consists of multiple read data groups, you should place the data groups and their respective write and read clocks in the same bank to improve I/O timing.
You do not need to specify the location of every data pin manually. If you assign the location for the read capture strobe/clock pin pairs, the Fitter automatically places the remaining data pins.
13. Ensure that DM/BWS pins are paired with a write data pin by placing one in an I/O pin and the other in the pairing pin for that I/O pin. It is recommended, though not required, that you follow the same rule for DBI pins, so that at a later date you have the freedom to repurpose the pin as DM.
Note:
1. x4 mode does not support DM/DBI and is not supported by Stratix 10 EMIF IP for HPS.
2. If you are using a Stratix 10 EMIF IP-based RLDRAM 3 external memory interface, ensure that all the pins in a DQS group (that is, DQ, DM, DK, and QK) are placed in the same I/O bank. This requirement facilitates timing closure and is necessary for successful compilation of your design.
Multiple Interfaces in the Same I/O Column
To place multiple interfaces in the same I/O column, you must ensure that the global
reset signals (global_reset_n) for each individual interface all come from the same
input pin or signal.
I/O Banks Selection
• For each memory interface, select adjacent I/O banks. (That is, select banks that have the same column number and letters immediately before or after the respective I/O bank letter.)
• A memory interface can only span I/O banks in the same I/O column.
• The number of I/O banks that you require depends on the memory interface width.
• In some device packages, certain LVDS I/O banks have fewer than 48 I/O pins.
Address/Command Pins Location
• All address/command pins for a controller must be in a single I/O bank.
• If your interface uses multiple I/O banks, the address/command pins must use the middle bank. If the number of banks used by the interface is even, either of the two middle I/O banks can be used for address/command pins.
• Address/command pins and data pins cannot share an I/O lane, but they can share an I/O bank.
• The address/command pin locations for the soft and hard memory controllers are predefined. In the External Memory Interface Pin Information for Devices spreadsheet, each index in the "Index within I/O bank" column denotes a dedicated address/command pin function for a given protocol. The index number of the pin specifies the I/O lane to which the pin belongs:
  — I/O lane 0: pins with index 0 to 11
  — I/O lane 1: pins with index 12 to 23
  — I/O lane 2: pins with index 24 to 35
  — I/O lane 3: pins with index 36 to 47
• For memory topologies and protocols that require only three I/O lanes for the address/command pins, use I/O lanes 0, 1, and 2.
• Unused address/command pins in an I/O lane can serve as general-purpose I/O pins.
CK Pins Assignment
Assign the clock pin (CK pin) according to the number of I/O banks in the interface:
• If the number of I/O banks is odd, assign one CK pin to the middle I/O bank.
• If the number of I/O banks is even, assign the CK pin to either of the two middle I/O banks.
Although the Fitter can automatically select the required I/O banks, Intel recommends that you make the selection manually to reduce the pre-fit run time.
PLL Reference Clock Pin Placement
Place the PLL reference clock pin in the address/command bank; other I/O banks may not have free pins that you can use as the PLL reference clock pin.
• If you are sharing the PLL reference clock pin between several interfaces, the I/O banks must be adjacent. (That is, the banks must have the same column number and letters immediately before or after the respective I/O bank letter.)
The Stratix 10 External Memory Interface IP does not support PLL cascading.
RZQ Pin Placement
You may place the RZQ pin in any I/O bank in an I/O column with the correct VCCIO and VCCPT for the memory interface I/O standard in use. However, the recommended location is the address/command I/O bank, for greater flexibility during debug if a narrower interface project is required for testing.
DQ and DQS Pins Assignment
Intel recommends that you assign the DQS pins to the remaining I/O lanes in the I/O banks as required:
• Constrain the DQ and DQS signals of the same DQS group to the same I/O lane.
• DQ signals from two different DQS groups cannot be constrained to the same I/O lane.
If you do not specify the DQS pin assignments, the Fitter selects the DQS pins automatically.
Sharing an I/O Bank Across Multiple Interfaces
If you are sharing an I/O bank across multiple external memory interfaces, follow these guidelines:
• The interfaces must use the same protocol, voltage, data rate, frequency, and PLL reference clock.
• You cannot use an I/O bank as the address/command bank for more than one interface. The memory controller and sequencer cannot be shared.
• You cannot share an I/O lane. There is only one DQS input per I/O lane, and an I/O lane can connect to only one memory controller.
Ping Pong PHY Implementation
The Ping Pong PHY feature instantiates two hard memory controllers—one for the
primary interface and one for the secondary interface. The hard memory controller I/O
bank of the primary interface is used for address and command and is always adjacent
(contains the same column number and letter before or after the respective I/O bank
letter) and above the hard memory controller I/O bank of the secondary interface. All
four lanes of the primary hard memory controller I/O bank are used for address and
command.
When you use Ping Pong PHY, the EMIF IP exposes two independent Avalon-MM
interfaces to user logic; these interfaces correspond to the two hard memory
controllers inside the interface. Each Avalon-MM interface has its own set of clock and
reset signals. Refer to Qsys Interfaces for more information on the additional signals
exposed by Ping Pong PHY interfaces.
For more information on Ping Pong PHY in Stratix 10, refer to Functional Description—
Stratix 10 EMIF, in this handbook. For pin allocation information for Stratix 10 devices,
refer to External Memory Interface Pin Information for Stratix 10 Devices on
www.altera.com.
Additional Requirements for DDR3 and DDR4 Ping-Pong PHY Interfaces
If you are using Ping Pong PHY with a DDR3 or DDR4 external memory interface on a
Stratix 10 device, follow these guidelines:
• The address and command I/O bank must not contain any DQS group.
• I/O banks that are above the address and command I/O bank must contain only data pins of the primary interface, that is, the interface with the lower DQS group indices.
• The I/O bank immediately below the address and command I/O bank must contain at least one DQS group of the secondary interface, that is, the interface with the higher DQS group indices. This I/O bank can, but is not required to, contain DQS groups of the primary interface.
• I/O banks that are two or more banks below the address and command I/O bank must contain only data pins of the secondary interface.
1.3.2 Resource Sharing Guidelines for Stratix 10 EMIF IP
In Stratix 10, different external memory interfaces can share PLL reference clock pins, core clock networks, I/O banks, and hard Nios processors. Each I/O bank has its own DLL and PLL resources; therefore, these do not need to be shared. The Fitter automatically merges DLL and PLL resources when a bank is shared by different external memory interfaces, and duplicates them for a multi-I/O-bank external memory interface.
PLL Reference Clock Pin
To conserve pin usage and enable core clock network and I/O bank sharing, you can
share a PLL reference clock pin between multiple external memory interfaces; the
interfaces must be of the same protocol, rate, and frequency. Sharing of a PLL
reference clock pin also implies sharing of the reference clock network.
Observe the following guidelines for sharing the PLL reference clock pin:
1. To share a PLL reference clock pin, connect the same signal to the pll_ref_clk port of multiple external memory interfaces in the RTL code.
2. Place related external memory interfaces in the same I/O column.
3. Place related external memory interfaces in adjacent I/O banks. If you leave an unused I/O bank between the I/O banks used by the external memory interfaces, that I/O bank cannot be used by any other external memory interface with a different PLL reference clock signal.
Note: You can place the pll_ref_clk pin in the address and command I/O bank or in a data I/O bank; there is no impact on timing. However, the recommendation is to place it in the address/command I/O bank.
Core Clock Network
To access all external memory interfaces synchronously and to reduce global clock
network usage, you may share the same core clock network with other external
memory interfaces.
Observe the following guidelines for sharing the core clock network:
1. To share a core clock network, connect the clks_sharing_master_out port of the master to the clks_sharing_slave_in port of all slaves in the RTL code.
2. Place related external memory interfaces in the same I/O column.
3. Related external memory interfaces must have the same rate, memory clock frequency, and PLL reference clock.
I/O Bank
To reduce I/O bank utilization, you may share an I/O Bank with other external
memory interfaces.
Observe the following guidelines for sharing an I/O Bank:
1. Related external memory interfaces must have the same protocol, rate, memory clock frequency, and PLL reference clock.
2. You cannot use a given I/O bank as the address and command bank for more than one external memory interface.
3. You cannot share an I/O lane between external memory interfaces, but an unused pin can serve as a general-purpose I/O pin of compatible voltage and termination standards.
Hard Nios Processor
All external memory interfaces residing in the same I/O column will share the same
hard Nios processor. The shared hard Nios processor calibrates the external memory
interfaces serially.
Reset Signal
When multiple external memory interfaces occupy the same I/O column, they must
share the same IP reset signal.
1.4 Guidelines for UniPHY-based External Memory Interface IP
Intel recommends that you place all the pins for one memory interface (attached to
one controller) on the same side of the device. For projects where I/O availability is
limited and you must spread the interface on two sides of the device, place all the
input pins on one side and the output pins on an adjacent side of the device, along
with their corresponding source-synchronous clock.
1.4.1 General Pin-out Guidelines for UniPHY-based External Memory
Interface IP
For best results in laying out your UniPHY-based external memory interface, you
should observe the following guidelines.
Note: For a unidirectional data bus, as in QDR II and QDR II+ SRAM interfaces, do not split a read data pin group or a write data pin group onto two sides. You should not split the address and command group onto two sides either, especially when you are interfacing with QDR II and QDR II+ SRAM burst-length-of-two devices, where the address signals are double data rate. Failure to adhere to these rules might result in timing failure.
In addition, there are some exceptions for the following interfaces:
• ×36 emulated QDR II and QDR II+ SRAM in Arria II, Stratix III, and Stratix IV devices.
• RLDRAM II and RLDRAM 3 CIO devices.
• QDR II and QDR II+ SRAM burst-length-of-two devices.
You must compile the design in the Quartus Prime software to ensure that you are not violating signal integrity and Quartus Prime placement rules, which is critical when you have transceivers in the same design.
The following are general guidelines for placing pins optimally for your memory
interfaces:
1. For Arria II GZ, Arria V, Cyclone V, Stratix III, Stratix IV, and Stratix V designs, if
you are using OCT, the RUP and RDN, or RZQ pins must be in any bank with the
same I/O voltage as your memory interface signals and often use two DQS or DQ
pins from a group. If you decide to place the RUP and RDN, or RZQ pins in a bank
where the DQS and DQ groups are used, place these pins first and then determine
how many DQ pins you have left, to find out if your data pins can fit in the
remaining pins. Refer to OCT Support for Arria II GX, Arria II GZ, Arria V, Arria V
GZ, Cyclone V, Stratix III, Stratix IV, and Stratix V Devices.
2. Use the PLL that is on the same side of the memory interface. If the interface is
spread out on two adjacent sides, you may use the PLL that is located on either
adjacent side. You must use the dedicated input clock pin to that particular PLL as
the reference clock for the PLL. The input of the memory interface PLL cannot
come from the FPGA clock network.
3. The Intel FPGA IP uses the output of the memory interface PLL as the DLL input
reference clock. Therefore, ensure you select a PLL that can directly feed a
suitable DLL.
Note: Alternatively, you can use an external pin to feed into the DLL input
reference clock. The available pins are also listed in the External Memory
Interfaces chapter of the relevant device family handbook. You can also
activate an unused PLL clock output, set it at the desired DLL frequency, and
route it to a PLL dedicated output pin. Connect a trace on the PCB from this
output pin to the DLL reference clock pin, but be sure to include any signal
integrity requirements such as terminations.
4. Read data pins require the use of DQS and DQ group pins to have access to the DLL control signals.
Note: In addition, QVLD pins in RLDRAM II and RLDRAM 3 DRAM, and QDR II+ SRAM, must use DQS group pins when the design uses the QVLD signal. None of the Intel FPGA IP uses QVLD pins as part of read capture, so theoretically you do not need to connect the QVLD pins if you are using the Intel solution. It is good to connect them anyway, in case the Intel solution is updated to use QVLD pins.
5. In differential clocking (DDR3/DDR2 SDRAM, RLDRAM II, and RLDRAM 3 interfaces), connect the positive leg of the read strobe or clock to a DQS pin, and the negative leg of the read strobe or clock to a DQSn pin. For QDR II or QDR II+ SRAM devices with 2.5 or 1.5 cycles of read latency, connect the CQ pin to a DQS pin, and the CQn pin to a CQn pin (and not the DQSn pin). For QDR II or QDR II+ SRAM devices with 2.0 cycles of read latency, connect the CQ pin to a CQn pin, and the CQn pin to a DQS pin.
6. Write data pins (if unidirectional) and data mask pins (DM or BWSn) must use DQS groups. Although the DLL phase shift is not used, using DQS groups for write data minimizes skew, and these pins must use the SW and TCCS timing analysis methodology.
7. Assign the write data strobe or write data clock (if unidirectional) to the corresponding DQS/DQSn pins of the write data group that is placed in DQ pins (except in RLDRAM II and RLDRAM 3 CIO devices). Refer to the Pin-out Rule Exceptions for your memory interface protocol.
Note: When interfacing with a DDR, DDR2, or DDR3 SDRAM without leveling, put the CK and CK# pairs in a single ×4 DQS group to minimize skew between clocks and maximize margin for the tDQSS, tDSS, and tDSH specifications of the memory devices.
8. Assign the address pins to any user I/O pins. To minimize skew within the address pin group, you should assign the address pins in the same bank or side of the device.
9. Assign the command pins to any I/O pins, and assign them in the same bank or device side as the other memory interface pins, especially the address and memory clock pins. The memory device usually uses the same clock to register address and command signals.
• In QDR II and QDR II+ SRAM interfaces, where the memory clock also registers the write data, assign the address and command pins in the same I/O bank or on the same side as the write data pins, to minimize skew.
• For more information about assigning memory clock pins for different device families and memory standards, refer to Pin Connection Guidelines Tables.
Related Links
• Pin Connection Guidelines Tables on page 62
  The following table lists the FPGA pin utilization for DDR, DDR2, and DDR3 SDRAM without leveling interfaces.
• Additional Guidelines for Arria V GZ and Stratix V Devices on page 67
  This section provides guidelines for improving timing for Arria V GZ and Stratix V devices and the rules that you must follow to overcome timing failures.
• OCT Support on page 37
  If the memory interface uses any FPGA OCT calibrated series, parallel, or dynamic termination for any I/O in your design, you need a calibration block for the OCT circuitry.
• Pin-out Rule Exceptions for ×36 Emulated QDR II and QDR II+ SRAM Interfaces in Arria II, Stratix III and Stratix IV Devices on page 54
  A few packages in the Arria II, Arria V GZ, Stratix III, Stratix IV, and Stratix V device families do not offer any ×32/×36 DQS groups where one read clock or strobe is associated with 32 or 36 read data pins.
• Pin-out Rule Exceptions for QDR II and QDR II+ SRAM Burst-length-of-two Interfaces on page 61
  If you are using QDR II and QDR II+ SRAM burst-length-of-two devices, you may want to place the address pins in a DQS group to minimize skew, because these pins are now double data rate too.
• Pin-out Rule Exceptions for RLDRAM II and RLDRAM 3 Interfaces on page 59
  RLDRAM II and RLDRAM 3 CIO devices have one bidirectional bus for the data, but there are two different sets of clocks: one for read and one for write.
1.4.2 Pin-out Rule Exceptions for ×36 Emulated QDR II and QDR II+
SRAM Interfaces in Arria II, Stratix III and Stratix IV Devices
A few packages in the Arria II, Arria V GZ, Stratix III, Stratix IV, and Stratix V device
families do not offer any ×32/×36 DQS groups where one read clock or strobe is
associated with 32 or 36 read data pins. This limitation exists in the following I/O
banks:
• All I/O banks in U358- and F572-pin packages for all Arria II GX devices
• All I/O banks in F484-pin packages for all Stratix III devices
• All I/O banks in F780-pin packages for all Arria II GZ, Stratix III, and Stratix IV devices; top and side I/O banks in F780-pin packages for all Stratix V and Arria V GZ devices
• All I/O banks in F1152-pin packages for all Arria II GZ, Stratix III, and Stratix IV devices, except EP4SGX290, EP4SGX360, EP4SGX530, EPAGZ300, and EPAGZ350 devices
• Side I/O banks in F1517- and F1760-pin packages for all Stratix III devices
• All I/O banks in F1517-pin packages for EP4SGX180, EP4SGX230, EP4S40G2, EP4S40G5, EP4S100G2, EP4S100G5, and EPAGZ225 devices
• Side I/O banks in F1517-, F1760-, and F1932-pin packages for all Arria II GZ and Stratix IV devices
These restrictions limit support for ×36 QDR II and QDR II+ SRAM devices. To support these memory devices, the following section describes how you can emulate the ×32/×36 DQS groups for these devices.
Note:
• The maximum frequency supported in ×36 QDR II and QDR II+ SRAM interfaces using ×36 emulation is lower than the maximum frequency when using a native ×36 DQS group.
• The F484-pin package in Stratix III devices cannot support ×32/×36 DQS group emulation, because it does not support ×16/×18 DQS groups.
To emulate a ×32/×36 DQS group, combine two ×16/×18 DQS groups together. For
×36 QDR II and QDR II+ SRAM interfaces, the 36-bit wide read data bus uses two
×16/×18 groups; the 36-bit wide write data uses another two ×16/×18 groups or four
×8/×9 groups. The CQ and CQn signals from the QDR II and QDR II+ SRAM device
traces are then split on the board to connect to two pairs of CQ/CQn pins in the FPGA.
You might then need to split the QVLD pins also (if you are connecting them). These
connections are the only connections on the board that you need to change for this
implementation. There is still only one pair of K and Kn connections on the board from
the FPGA to the memory (see the following figure). Use an external termination for
the CQ/CQn signals at the FPGA end. You can use the FPGA OCT features on the other
QDR II interface signals with ×36 emulation. In addition, there may be extra
assignments to be added with ×36 emulation.
Note:
Other QDR II and QDR II+ SRAM interface rules also apply for this implementation.
You may also combine four ×9 DQS groups (or two ×9 DQS groups and one ×18 group) on the same side of the device, if not in the same I/O bank, to emulate a ×36 write data group, if you need to fit the QDR II interface on a particular side of the device that does not have enough ×18 DQS groups available for write data pins. Intel does not recommend using ×4 groups, because the skew may be too large; you would need eight ×4 groups to emulate the ×36 write data bits.
You cannot combine four ×9 groups to create a ×36 read data group, because the loading on the CQ pin would be too large and the signal would be degraded too much.
When splitting the CQ and CQn signals, the two trace lengths that go to the FPGA pins
must be as short as possible to reduce reflection. These traces must also have the
same trace delay from the FPGA pin to the Y or T junction on the board. The total
trace delay from the memory device to each pin on the FPGA should match the Q
trace delay (l2).
Note:
You must match the trace delays. However, matching trace length is only an
approximation to matching actual delay.
Figure 9. Board Trace Connection for Emulated x36 QDR II and QDR II+ SRAM Interface
(The figure shows the QDR II SRAM CQ and CQn outputs split on the board to two DQS/DQSn pin pairs at the FPGA IOE, each capturing an 18-bit DQ group through the DQS logic block and DDR input registers, with a single K/Kn pair and the D and A buses from the FPGA; trace lengths l1 and l2 are annotated as referenced in the surrounding text.)
1.4.2.1 Timing Impact on x36 Emulation
With ×36 emulation, the CQ/CQn signals are split on the board, so these signals see
two loads (to the two FPGA pins)—the DQ signals still only have one load. The
difference in loading gives some slew rate degradation, and a later CQ/CQn arrival
time at the FPGA pin.
The slew rate degradation factor is taken into account during timing analysis when you
indicate in the UniPHY Preset Editor that you are using ×36 emulation mode. However,
you must determine the difference in CQ/CQn arrival time as it is highly dependent on
your board topology.
The slew rate degradation factor for ×36 emulation assumes that CQ/CQn has a
slower slew rate than a regular ×36 interface. The slew rate degradation is assumed
not to be more than 500 ps (from 10% to 90% VCCIO swing). You may also modify
your board termination resistor to improve the slew rate of the ×36-emulated CQ/CQn
signals. If your modified board does not have any slew rate degradation, you do not
need to enable the ×36 emulation timing in the UniPHY-based controller parameter
editor.
For more information about how to determine the CQ/CQn arrival time skew, refer to
Determining the CQ/CQn Arrival Time Skew.
Because of this effect, the maximum frequency supported using x36 emulation is
lower than the maximum frequency supported using a native x36 DQS group.
Related Links
Determining the CQ/CQn Arrival Time Skew on page 57
Before compiling a design in the Quartus Prime software, you need to determine
the CQ/CQn arrival time skew based on your board simulation.
1.4.2.2 Rules to Combine Groups
For devices that do not have four ×16/×18 groups on a single side of the device to form two ×36 groups for read and write data, you can form one ×36 group on one side of the device, and another ×36 group on the other side of the device. All the read groups must be on the same edge type (column I/O or row I/O), and all the write groups must be on the same edge type (column I/O or row I/O), so you can have an interface with the read group in column I/O and the write group in row I/O. The only restriction is that you cannot combine an ×18 group from column I/O with an ×18 group from row I/O to form a ×36-emulated group.
For vertical migration with the ×36 emulation implementation, check whether migration is possible and enable device migration in the Quartus Prime software.
Note:
I/O bank 1C in both Stratix III and Stratix IV devices has dual-function configuration
pins. Some of the DQS pins may not be available for memory interfaces if these are
used for device configuration purposes.
Each side of the device in these packages has four remaining ×8/×9 groups. You can combine four of these remaining groups for the write side (only) if you want to keep the ×36 QDR II and QDR II+ SRAM interface on one side of the device, by changing the Memory Interface Data Group default assignment from 18 to 9.
For more information about rules to combine groups for your target device, refer to
the External Memory Interfaces chapter in the respective device handbooks.
1.4.2.3 Determining the CQ/CQn Arrival Time Skew
Before compiling a design in the Quartus Prime software, you need to determine the
CQ/CQn arrival time skew based on your board simulation. You then need to apply this
skew in the report_timing.tcl file of your QDR II and QDR II+ SRAM interface in the
Quartus Prime software.
The following figure shows an example of a board topology comparing an emulated
case where CQ is double-loaded and a non-emulated case where CQ only has a single
load.
Figure 10. Board Simulation Topology Example
Run the simulation and look at the signal at the FPGA pin. The following figure shows an example of the simulation results from the preceding figure. As expected, the double-loaded emulated signal, in pink, arrives at the FPGA pin later than the single-loaded signal, in red. You then need to calculate the difference of this arrival time at the VREF level (0.75 V in this case). Record the skew and rerun the simulation for the other two cases (slow-weak and fast-strong). To pick the largest and smallest skew to be included in the Quartus Prime timing analysis, follow these steps:
1. Open the <variation_name>_report_timing.tcl file and search for tmin_additional_dqs_variation.
2. Set the minimum skew value from your board simulation to tmin_additional_dqs_variation.
3. Set the maximum skew value from your board simulation to tmax_additional_dqs_variation.
4. Save the .tcl file.
Figure 11. Board Simulation Results
1.4.3 Pin-out Rule Exceptions for RLDRAM II and RLDRAM 3 Interfaces
RLDRAM II and RLDRAM 3 CIO devices have one bidirectional bus for the data, but two different sets of clocks: one for read and one for write. Because the QK and QK# signals already occupy the DQS and DQSn pins needed for read, placement of the DK and DK# pins is restricted by the limited number of pins in the FPGA. This limitation causes exceptions to the previous rules, which are discussed below.
The address or command pins of RLDRAM II must be placed in a DQ group because these pins are driven by the PHY clock. Half-rate RLDRAM II interfaces and full-rate RLDRAM 3 interfaces use the PHY clock for both the DQ pins and the address or command pins.
1.4.3.1 Interfacing with ×9 RLDRAM II CIO Devices
RLDRAM 3 devices do not have the x9 configuration.
RLDRAM II devices have the following pins:
• 2 pins for QK and QK# signals
• 9 DQ pins (in a ×8/×9 DQS group)
• 2 pins for DK and DK# signals
• 1 DM pin
• 14 pins total (15 if you have a QVLD)
In the FPGA, the ×8/×9 DQS group consists of 12 pins: 2 for the read clocks and 10 for the data. In this case, move the QVLD pin (if you want to keep it connected, even though it is not used in the Intel FPGA memory interface solution) and the DK and DK# pins to the adjacent DQS group. If that group is in use, move them to any available user I/O pins in the same I/O bank.
1.4.3.2 Interfacing with ×18 RLDRAM II and RLDRAM 3 CIO Devices
This topic describes interfacing with x18 RLDRAM II and RLDRAM 3 devices.
RLDRAM II devices have the following pins:
• 4 pins for QK/QK# signals
• 18 DQ pins (in ×8/×9 DQS groups)
• 2 pins for DK/DK# signals
• 1 DM pin
• 25 pins total (26 if you have a QVLD)
In the FPGA, you use two ×8/×9 DQS groups totaling 24 pins: 4 for the read clocks and 18 for the read data.
Each ×8/×9 group has one DQ pin left over that can be used for either QVLD or DM, so one ×8/×9 group has the DM pin associated with it and the other ×8/×9 group has the QVLD pin associated with it.
RLDRAM 3 devices have the following pins:
• 4 pins for QK/QK# signals
• 18 DQ pins (in ×8/×9 DQS groups)
• 4 pins for DK/DK# signals
• 2 DM pins
• 28 pins total (29 if you have a QVLD)
In the FPGA, you use two ×8/×9 DQS groups totaling 24 pins: 4 for the read clocks and 18 for the read data.
Each ×8/×9 group has one DQ pin left over that can be used for either QVLD or DM, so one ×8/×9 group has the DM pin associated with it and the other ×8/×9 group has the QVLD pin associated with it.
1.4.3.3 Interfacing with RLDRAM II and RLDRAM 3 ×36 CIO Devices
This topic describes interfacing with RLDRAM II and RLDRAM 3 x36 CIO devices.
RLDRAM II devices have the following pins:
• 4 pins for QK/QK# signals
• 36 DQ pins (in ×16/×18 DQS groups)
• 4 pins for DK/DK# signals
• 1 DM pin
• 46 pins total (47 if you have a QVLD)
In the FPGA, you use two ×16/×18 DQS groups totaling 48 pins: 4 for the read clocks and 36 for the read data. Configure each ×16/×18 DQS group to have:
• Two QK/QK# pins occupying the DQS/DQSn pins.
• Two DQ pins for the DK and DK# pins; pick DQ pins in the ×16/×18 DQS group that function as DQS and DQSn pins in the ×4 or ×8/×9 DQS groups.
• 18 DQ pins occupying the DQ pins.
• Two leftover DQ pins that you can use for the QVLD or DM pins. Put the DM pin in the group associated with DK[1] and the QVLD pin in the group associated with DK[0].
• Check that DM is associated with DK[1] for your chosen memory component.
RLDRAM 3 devices have the following pins:
• 8 pins for QK/QK# signals
• 36 DQ pins (in ×8/×9 DQS groups)
• 4 pins for DK/DK# signals
• 2 DM pins
• 48 pins total (49 if you have a QVLD)
In the FPGA, you use four ×8/×9 DQS groups.
In addition, observe the following placement rules for RLDRAM 3 interfaces:
For ×18 devices:
• Use two ×8/×9 DQS groups. Assign the QK/QK# pins and the DQ pins of the same read group to the same DQS group.
• DQ, DM, and DK/DK# pins belonging to the same write group should be assigned to the same I/O sub-bank, for timing closure.
• Whenever possible, assign CK/CK# pins to the same I/O sub-bank as the DK/DK# pins, to improve tCKDK timing.
For ×36 devices:
• Use four ×8/×9 DQS groups. Assign the QK/QK# pins and the DQ pins of the same read group to the same DQS group.
• DQ, DM, and DK/DK# pins belonging to the same write group should be assigned to the same I/O sub-bank, for timing closure.
• Whenever possible, assign CK/CK# pins to the same I/O sub-bank as the DK/DK# pins, to improve tCKDK timing.
1.4.4 Pin-out Rule Exceptions for QDR II and QDR II+ SRAM Burst-length-of-two Interfaces
If you are using the QDR II and QDR II+ SRAM burst-length-of-two devices, you may
want to place the address pins in a DQS group to minimize skew, because these pins
are now double data rate too.
The address pins typically do not exceed 22 bits, so you may use one ×18 DQS group or two ×9 DQS groups on the same side of the device, if not in the same I/O bank. In Arria V GZ, Stratix III, Stratix IV, and Stratix V devices, one ×18 group typically has 22 DQ bits and 2 pins for DQS/DQSn pins, while one ×9 group typically has 10 DQ bits with 2 pins for DQS/DQSn pins. Using ×4 DQS groups should be a last resort.
1.4.5 Pin Connection Guidelines Tables
The following table lists the FPGA pin utilization for DDR, DDR2, and DDR3 SDRAM
without leveling interfaces.
Table 11. FPGA Pin Utilization for DDR, DDR2, and DDR3 SDRAM without Leveling Interfaces
The FPGA pin utilization is listed by device family group (Arria II GX; Arria II GZ, Stratix III, and Stratix IV; Arria V, Cyclone V, and Stratix V; MAX 10 FPGA) where it differs, and once where it applies to all families.

Memory system clock (memory device pins CK and CK#) (1) (2):
• Arria II GX: If you are using single-ended DQS signaling, place any unused DQ or DQS pins with DIFFOUT capability located in the same bank or on the same side as the data pins. If you are using differential DQS signaling in UniPHY IP, place on DIFFOUT in the same single DQ group of adequate width to minimize skew.
• Arria II GZ, Stratix III, and Stratix IV: If you are using single-ended DQS signaling, place any DIFFOUT pins in the same bank or on the same side as the data pins. If you are using differential DQS signaling in UniPHY IP, place any DIFFOUT pins in the same bank or on the same side as the data pins. If there are multiple CK/CK# pairs, place them on DIFFOUT in the same single DQ group of adequate width. For example, DIMMs requiring three memory clock pin-pairs must use a ×4 DQS group.
• Arria V, Cyclone V, and Stratix V: If you are using single-ended DQS signaling, place any unused DQ or DQS pins with DIFFOUT capability in the same bank or on the same side as the data pins. If you are using differential DQS signaling, place any unused DQ or DQS pins with DIFFOUT capability for the mem_clk[n:0] and mem_clk_n[n:0] signals (where n>=0). CK and CK# pins must use a pin pair that has DIFFOUT capability. CK and CK# pins can be in the same group as other DQ or DQS pins, and can be placed such that one signal of the differential pair is in a DQ group and the other signal is not. If there are multiple CK and CK# pin pairs, place them on DIFFOUT in the same single DQ group of adequate width.
• MAX 10 FPGA: Place any differential I/O pin pair (DIFFIO) in the same bank or on the same side as the data pins.

Clock source (all families): Dedicated PLL clock input pin with direct connection to the PLL (not using the global clock network). For Arria II GX, Arria II GZ, Arria V GZ, Stratix III, Stratix IV, and Stratix V devices, also ensure that the PLL can supply the input reference clock to the DLL; otherwise, refer to alternative DLL input reference clocks (see General Pin-out Guidelines).

Reset (all families): Dedicated clock input pin to accommodate the high fan-out signal.

Data (DQ) and data mask (DM) (all families): DQ in the pin table, marked as Q in the Quartus Prime Pin Planner. Each DQ group has a common background color for all of the DQ and DM pins associated with the DQS (and DQSn) pins.

Data strobe (DQS, or DQS and DQSn; differential DQS applies to DDR2 and DDR3 SDRAM only) (all families): DQS (S in the Quartus Prime Pin Planner) for single-ended DQS signaling, or DQS and DQSn (S and Sbar in the Quartus Prime Pin Planner) for differential DQS signaling. DDR2 supports either single-ended or differential DQS signaling; DDR3 SDRAM mandates differential DQS signaling.

Address and command (A[], BA[], CAS#, CKE, CS#, ODT, RAS#, WE#, RESET#) (all families): Any user I/O pin. To minimize skew, you must place the address and command pins in the same bank or side of the device as the CK/CK# pins, DQ, DQS, or DM pins. The RESET# signal is only available in DDR3 SDRAM interfaces. Intel devices use the SSTL-15 I/O standard on the RESET# signal to meet the voltage requirements of 1.5 V CMOS at the memory device; Intel recommends that you do not terminate the RESET# signal to VTT.

Notes to Table:
1. The first CK/CK# pair refers to mem_clk[0] or mem_clk_n[0] in the IP core.
2. The restriction on the placement of the first CK/CK# pair is required because this placement allows the mimic path that the IP VT tracking uses to go through differential I/O buffers, to mimic the differential DQS signals.
Related Links
General Pin-out Guidelines for UniPHY-based External Memory Interface IP on page 51
For best results in laying out your UniPHY-based external memory interface, you
should observe the following guidelines.
1.4.5.1 DDR3 SDRAM With Leveling Interface Pin Utilization Applicable for Arria
V GZ, Stratix III, Stratix IV, and Stratix V Devices
The following table lists the FPGA pin utilization for DDR3 SDRAM with leveling
interfaces.
Table 12. DDR3 SDRAM With Leveling Interface Pin Utilization Applicable for Arria V GZ, Stratix III, Stratix IV, and Stratix V Devices

Data (DQ) and data mask (DM): DQ in the pin table, marked as Q in the Quartus Prime Pin Planner. Each DQ group has a common background color for all of the DQ and DM pins associated with the DQS (and DQSn) pins. The ×4 DIMM has the following mapping between DQS and DQ pins:
• DQS[0] maps to DQ[3:0]
• DQS[9] maps to DQ[7:4]
• DQS[1] maps to DQ[11:8]
• DQS[10] maps to DQ[15:12]
The DQS pin index in other DIMM configurations typically increases sequentially with the DQ pin index (DQS[0]: DQ[3:0]; DQS[1]: DQ[7:4]; DQS[2]: DQ[11:8]). In this DIMM configuration, the DQS pins are indicated this way to ensure that the pin-out is compatible with both ×4 and ×8 DIMMs.

Data strobe (DQS and DQSn): DQS and DQSn (S and Sbar in the Quartus Prime Pin Planner).

Address and command (A[], BA[], CAS#, CKE, CS#, ODT, RAS#, WE#): Any user I/O pin. To minimize skew, you should place address and command pins in the same bank or side of the device as the following pins: CK/CK# pins, DQ, DQS, or DM pins.

RESET#: Intel recommends that you use the 1.5V CMOS I/O standard on the RESET# signal. If your board is already using the SSTL-15 I/O standard, do not terminate the RESET# signal to VTT.

Memory system clock (CK and CK#): For controllers with UniPHY IP, you can assign the memory clock to any unused DIFF_OUT pins in the same bank or on the same side as the data pins. However, for Arria V GZ and Stratix V devices, place the memory clock pins on any unused DQ or DQS pins; do not place the memory clock pins in the same DQ group as any other DQ or DQS pins. If there are multiple CK/CK# pin pairs in Arria V GZ or Stratix V devices, you must place them on DIFFOUT in the same single DQ group of adequate width. For example, DIMMs requiring three memory clock pin-pairs must use a ×4 DQS group. Placing the multiple CK/CK# pin pairs on DIFFOUT in the same single DQ group for Stratix III and Stratix IV devices improves timing.

Clock source: Dedicated PLL clock input pin with direct (not using a global clock net) connection to the PLL and optional DLL required by the interface.

Reset: Dedicated clock input pin to accommodate the high fan-out signal.
1.4.5.2 QDR II and QDR II+ SRAM Pin Utilization for Arria II, Arria V, Stratix III,
Stratix IV, and Stratix V Devices
The following table lists the FPGA pin utilization for QDR II and QDR II+ SRAM
interfaces.
Table 13. QDR II and QDR II+ SRAM Pin Utilization for Arria II, Arria V, Stratix III, Stratix IV, and Stratix V Devices

Read clock (CQ and CQ#) (1): For QDR II SRAM devices with 1.5 or 2.5 cycles of read latency, or QDR II+ SRAM devices with 2.5 cycles of read latency, connect CQ to a DQS pin (S in the Quartus Prime Pin Planner) and CQn to a CQn pin (Qbar in the Quartus Prime Pin Planner). For QDR II or QDR II+ SRAM devices with 2.0 cycles of read latency, connect CQ to a CQn pin (Qbar in the Quartus Prime Pin Planner) and CQn to a DQS pin (S in the Quartus Prime Pin Planner). Arria V devices do not use CQn; the CQ rising and falling edges are used to clock the read data instead of separate CQ and CQn signals.

Read data (Q) and data valid (QVLD): DQ pins (Q in the Quartus Prime Pin Planner). Ensure that you are using the DQ pins associated with the chosen read clock pins (DQS and CQn pins). QVLD pins are only available on QDR II+ SRAM devices; note that the Intel FPGA IP does not use the QVLD pin.

Memory and write data clock (K and K#): Differential or pseudo-differential DQ, DQS, or DQSn pins in or near the write data group.

Write data (D) and byte write select (BWS#, NWS#): DQ pins. Ensure that you are using the DQ pins associated with the chosen memory and write data clock pins (DQS and DQSn pins).

Address and command (A, WPS#, RPS#): Any user I/O pin. To minimize skew, you should place address and command pins in the same bank or side of the device as the following pins: K and K# pins, DQ, DQS, BWS#, and NWS# pins. If you are using burst-length-of-two devices, place the address signals on DQS group pins, because these signals are now double data rate.

Clock source: Dedicated PLL clock input pin with direct (not using a global clock net) connection to the PLL and optional DLL required by the interface.

Reset: Dedicated clock input pin to accommodate the high fan-out signal.

Note to table:
1. For Arria V designs with integer latency, connect the CQ# signal to the CQ/CQ# pins from the pin table and ignore the polarity in the Pin Planner. For Arria V designs with fractional latency, connect the CQ signal to the CQ/CQ# pins from the pin table.
1.4.5.3 RLDRAM II CIO Pin Utilization for Arria II GZ, Arria V, Stratix III, Stratix
IV, and Stratix V Devices
The following table lists the FPGA pin utilization for RLDRAM II CIO and RLDRAM 3
interfaces.
Table 14.    RLDRAM II CIO Pin Utilization for Arria II GZ, Arria V, Stratix III, Stratix IV, and Stratix V Devices and RLDRAM 3 Pin Utilization for Arria V GZ and Stratix V Devices

Each entry lists the interface pin description, the memory device pin name (1), and the FPGA pin utilization.

• Read Clock (QK and QK#): DQS and DQSn pins (S and Sbar in the Quartus Prime Pin Planner).
• Data (Q), Data Valid (QVLD), and Data Mask (DM): DQ pins (Q in the Quartus Prime Pin Planner). Ensure that you are using the DQ pins associated with the chosen read clock pins (DQS and DQSn pins). Intel FPGA IP does not use the QVLD pin; you may leave this pin unconnected on your board. You may not be able to fit these pins in a DQS group; for more information about how to place these pins, refer to Pin-out Rule Exceptions for RLDRAM II and RLDRAM 3 Interfaces.
• Write Data Clock (DK and DK#): DQ pins in the same DQS group as the read data (Q) pins, in an adjacent DQS group, or in the same bank as the address and command pins. For more information, refer to Pin-out Rule Exceptions for RLDRAM II and RLDRAM 3 Interfaces. DK/DK# must use differential output-capable pins. For Nios-based configuration, the DK pins must be in a DQ group, but the DK pins do not have to be in the same group as the data or QK pins.
• Memory Clock (CK and CK#): Any differential output-capable pins. For Arria V GZ and Stratix V devices, place on any unused DQ or DQS pins with DIFFOUT capability. Place the memory clock pins either in the same bank as the DK or DK# pins to improve DK versus CK timing, or in the same bank as the address and command pins to improve address and command timing. Do not place CK and CK# pins in the same DQ group as any other DQ or DQS pins.
• Address and Command (A, BA, CS#, REF#, WE#): Any user I/O pins. To minimize skew, you should place address and command pins in the same bank or side of the device as the following pins: CK/CK# pins, DQ, DQS, and DM pins.
• Clock source (—): Dedicated PLL clock input pin with direct (not using a global clock net) connection to the PLL and optional DLL required by the interface.
• Reset (—): Dedicated clock input pin to accommodate the high fan-out signal.

Note to table:
1. For Arria V devices, refer to the pin table for the QK and QK# pins. Connect QK and QK# signals to the QK and QK# pins from the pin table and ignore the polarity in the Pin Planner.
Related Links
Pin-out Rule Exceptions for RLDRAM II and RLDRAM 3 Interfaces on page 59
RLDRAM II and RLDRAM 3 CIO devices have one bidirectional bus for the data, but
there are two different sets of clocks: one for read and one for write.
1.4.5.4 LPDDR2 Pin Utilization for Arria V, Cyclone V, and MAX 10 FPGA Devices
The following table lists the FPGA pin utilization for LPDDR2 SDRAM.
Table 15.    LPDDR2 Pin Utilization for Arria V, Cyclone V, and MAX 10 FPGA Devices

Each entry lists the interface pin description, the memory device pin name, and the FPGA pin utilization.

• Memory Clock (CK, CKn): Differential clock inputs. All double data rate (DDR) inputs are sampled on both positive and negative edges of the CK signal. Single data rate (SDR) inputs are sampled at the positive clock edge. Place the mem_clk[n:0] and mem_clk_n[n:0] signals (where n >= 0) on any unused DQ or DQS pins with DIFFOUT capability. Do not place the CK and CKn pins in the same group as any other DQ or DQS pins. If there are multiple CK and CKn pin pairs, place them on DIFFOUT in the same single DQ group of adequate width.
• Address and Command (CA0-CA9, CSn, CKE): Unidirectional DDR command and address bus inputs. Chip Select: CSn is considered to be part of the command code. Clock Enable: CKE HIGH activates, and CKE LOW deactivates, the internal clock signals and therefore the device input buffers and output drivers. Place address and command pins on any DDR-capable I/O pins. To minimize skew, Intel recommends placing the address and command pins in the same bank or side of the device as the CK/CKn, DQ, DQS, or DM pins.
• Data (DQ0-DQ7 for ×8, DQ0-DQ15 for ×16, DQ0-DQ31 for ×32): Bidirectional data bus. Pins are used as data inputs and outputs. DQ in the pin table is marked as Q in the Pin Planner. Each DQ group has a common background color for all of the DQ and DM pins associated with the DQS (and DQSn) pins. Place on DQ group pins marked Q in the Pin Planner.
• Data Strobe (DQS, DQSn): The data strobe is bidirectional (used for read and write data) and differential (DQS and DQSn). It is output with read data and input with write data. Place on DQS and DQSn pins (S and Sbar in the Pin Planner) for differential DQS signaling.
• Data Mask (DM0 for ×8, DM0-DM1 for ×16, DM0-DM3 for ×32): Input data mask. DM is the input mask signal for write data. Input data is masked when DM is sampled HIGH coincident with that input data during a write access. DM is sampled on both edges of DQS. Place on DQ group pins marked Q in the Pin Planner.
• Clock Source (—): Dedicated PLL clock input pin with direct (not using a global clock net) connection to the PLL and optional DLL required by the interface.
• Reset (—): Dedicated clock input pin to accommodate the high fan-out signal.
1.4.5.5 Additional Guidelines for Arria V GZ and Stratix V Devices
This section provides guidelines for improving timing for Arria V GZ and Stratix V
devices and the rules that you must follow to overcome timing failures.
Performing Manual Pin Placement
The following table lists rules that you can follow to perform proper manual pin
placement and avoid timing failures.
The rules are categorized as follows:
• Mandatory—This rule is mandatory and cannot be violated, as it would result in a no-fit error.
• Recommended—This rule is recommended; if violated, the implementation is legal but the timing is degraded.
• Highly Recommended—This rule is not mandatory but is highly recommended, because disregarding it might result in timing violations.

Table 16.    Manual Pin Placement Rules

Each rule is listed with its frequency range, the devices to which it applies, and the reason for the rule.

Mandatory
• Must place all CK, CK#, address, control, and command pins of an interface in the same I/O sub-bank. (Frequency: > 800 MHz; Device: All) Reason: For optimum timing, clock and data output paths must share as much hardware as possible. For write data pins (for example, DQ/DQS), the best timing is achieved through the DQS groups.
• Must not split an interface between the top and bottom sides. (Frequency: Any; Device: All) Reason: PLLs and DLLs on the top edge cannot access the bottom edge of a device, and vice versa.
• Must not place pins from separate interfaces in the same I/O sub-banks unless the interfaces share PLL or DLL resources. (Frequency: Any; Device: All) Reason: All pins require access to the same leveling block.
• Must not share the same PLL input reference clock unless the interfaces share PLL or DLL resources. (Frequency: Any; Device: All) Reason: Sharing the same PLL input reference clock forces the same ff-PLL to be used. Each ff-PLL can drive only one PHY clock tree, and interfaces not sharing a PLL cannot share a PHY clock tree.

Recommended
• Place all CK, CK#, address, control, and command pins of an interface in the same I/O sub-bank. (Frequency: < 800 MHz; Device: All) Reason: Place all CK/CK#, address, control, and command pins in the same I/O sub-bank when address and command timing is critical. For optimum timing, clock and data output paths should share as much hardware as possible. For write data pins (for example, DQ/DQS), the best timing is achieved through the DQS groups.
• Avoid using I/Os at the device corners (for example, sub-bank "A"). (Frequency: Any; Device: A7 (1). Frequency: >= 800 MHz; Device: All) Reason: The delay from the FPGA core fabric to the I/O periphery is higher toward the sub-banks in the corners; by not using I/Os at the device corners, you can improve core timing closure. Corner I/O pins also use longer delays, so avoiding corner I/O pins is recommended for better memory clock performance.
• Avoid straddling an interface across the center PLL. (Frequency: Any; Device: All) Reason: Straddling the center PLL causes timing degradation, because it increases the length of the PHY clock tree and increases jitter. By not straddling the center PLL, you can improve core timing closure.

Highly Recommended
• Use the center PLL (f-PLL1) for a wide interface that must straddle across the center PLL. (Frequency: >= 800 MHz; Device: All) Reason: Using a non-center PLL results in driving a sub-bank in the opposite quadrant due to long PHY clock tree delay.
• Place the DQS/DQS# pins such that all DQ groups of the same interface are next to each other and do not span across the center PLL. (Frequency: Any; Device: All) Reason: To ease core timing closure. If the pins are too far apart, the core logic is also placed apart, which results in difficult timing closure.
• Place CK, CK#, address, control, and command pins in the same quadrant as the DQ groups, for improved timing in general. (Frequency: Any; Device: All)
• Place all CK, CK#, address, control, and command pins of an interface in the same I/O sub-bank. (Frequency: >= 800 MHz; Device: All) Reason: For optimum timing, clock and data output paths should share as much hardware as possible. For write data pins (for example, DQ/DQS), the best timing is achieved through the DQS groups.
• Use the center PLL and ensure that the PLL input reference clock pin is placed at a location that can drive the center PLL. (Frequency: >= 800 MHz; Device: All) Reason: Using a non-center PLL results in driving a sub-bank in the opposite quadrant due to long PHY clock tree delay.
• If the center PLL is not accessible, place pins in the same quadrant as the PLL. (Frequency: >= 800 MHz; Device: All)

Note to table:
1. This rule is currently applicable to A7 devices only. This rule might be applied to other devices in the future if they show the same failure.
1.4.5.6 Additional Guidelines for Arria V (Except Arria V GZ) Devices
This section provides guidelines on how to improve timing for Arria V devices and the
rules that you must follow to overcome timing failures.
Performing Manual Pin Placement
The following table lists rules you can follow to perform proper manual pin placement
and avoid timing failures.
The rules are categorized as follows:
• Mandatory—This rule is mandatory and cannot be violated, as it would result in a no-fit error.
• Recommended—This rule is recommended; if violated, the implementation is legal but the timing is degraded.

Table 17.    Manual Pin Placement Rules for Arria V (Except Arria V GZ) Devices

Each rule is listed with its frequency range, the devices to which it applies, and the reason for the rule.

Mandatory
• Must place all CK, CK#, address, control, and command pins of an interface on the same device edge as the DQ groups. (Frequency: All; Device: All) Reason: For optimum timing, clock and data output ports must share as much hardware as possible.
• Must not place pins from separate interfaces in the same I/O sub-banks unless the interfaces share PLL or DLL resources. To share resources, the interfaces must use the same memory protocol, frequency, controller rate, and phase requirements. (Frequency: All; Device: All) Reason: All pins require access to the same PLL/DLL block.
• Must not split an interface between the top, bottom, and right sides. (Frequency: All; Device: All) Reason: PHYCLK networks support interfaces on the same side of the I/O banks only; PHYCLK networks do not support split interfaces.

Recommended
• Place the DQS/DQS# pins such that all DQ groups of the same interface are next to each other and do not span across the center PLL. (Frequency: All; Device: All) Reason: To ease core timing closure. If the pins are too far apart, the core logic is also placed apart, which results in difficult timing closure.
• Place all pins for a memory interface in one I/O bank and use the nearest PLL to that I/O bank for the memory interface. (Frequency: All; Device: All) Reason: Improves timing performance by reducing the PHY clock tree delay.
Note:
Not all hard memory controllers on a given device package necessarily have the same
address widths; some hard memory controllers have 16-bit address capability, while
others have only 15-bit addresses.
1.4.5.7 Additional Guidelines for MAX 10 Devices
The following additional guidelines apply when you implement an external memory
interface for a MAX 10 device.
I/O Pins Not Available for DDR3 or LPDDR2 External Memory Interfaces
(Preliminary)
The I/O pins named in the following table are not available for use when implementing
a DDR3 or LPDDR2 external memory interface for a MAX 10 device.
• 10M16: F256 package: N16, P16. U324 package: R15, P15, R18, P18, E16, D16. F484 package: U21, U22, M21, L22, F21, F20, E19, F18. (No pins listed for the F672 package.)
• 10M25: F256 package: N16, P16. F484 package: U21, U22, M21, L22, F21, F20, E19, F18, F17, E17. (No pins listed for the U324 or F672 packages.)
• 10M50: F256 package: N16, P16. F484 package: U21, U22, M21, L22, F21, F20, E19, F18, F17, E17. F672 package: W23, W24, U25, U24, T24, R25, R24, P25, K23, K24, J23, H23, G23, F23, G21, G22. (No pins listed for the U324 package.)
Additional Restrictions on I/O Pin Availability
The following restrictions are in addition to those represented in the above table.
• When implementing a DDR3 or LPDDR2 external memory interface, you can use only 75 percent of the remaining I/O pins in banks 5 and 6 for normal I/O operations.
• When implementing a DDR2 external memory interface, 25 percent of the remaining I/O pins in banks 5 and 6 can be assigned only as input pins.
MAX 10 Board Design Considerations
• For DDR2, DDR3, and LPDDR2 interfaces, the maximum board skew between pins must be lower than 40 ps. This guideline applies to all pins (address, command, clock, and data). (A short skew-budget sketch follows this list.)
• To minimize unwanted inductance from the board via, Intel recommends that you keep the PCB via depth for VCCIO banks below 49.5 mil.
• For devices with a DDR3 interface implementation, onboard termination is required for the DQ, DQS, and address signals. Intel recommends that you use a termination resistor value of 80 Ω to VTT.
• For the DQ, address, and command pins, keep the PCB trace routing length less than six inches for DDR3, or less than three inches for LPDDR2.
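As a rough illustration of the 40 ps skew budget, the following hand-calculation sketch is not from the handbook; the ~170 ps/inch propagation delay is an assumed typical value for outer-layer FR-4 traces, so substitute the numbers for your own stackup:

```python
# Hypothetical skew check for a MAX 10 DDR3/LPDDR2 interface.
# Assumption: ~170 ps of delay per inch of trace (typical FR-4 microstrip estimate).
PS_PER_INCH = 170.0
MAX_SKEW_PS = 40.0          # board skew limit from the guideline above

def length_mismatch_limit_inch(ps_per_inch: float = PS_PER_INCH,
                               max_skew_ps: float = MAX_SKEW_PS) -> float:
    """Longest allowed trace-length mismatch (in inches) within the skew budget."""
    return max_skew_ps / ps_per_inch

print(f"{length_mismatch_limit_inch():.3f} inch")  # ~0.235 inch (about 6 mm)
```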
Power Supply Variation for LPDDR2 Interfaces
For an LPDDR2 interface that targets 200 MHz, constrain the memory device I/O and
core power supply variation to within ±3%.
1.4.5.8 Additional Guidelines for Cyclone V Devices
This topic provides guidelines for improving performance for Cyclone V devices.
I/O Pins Connect to Ground for Hard Memory Interface Operation
According to the Cyclone V pin-out file, there are some general I/O pins that are
connected to ground for hard memory interface operation. These I/O pins should be
grounded to reduce crosstalk from neighboring I/O pins and to ensure the
performance of the hard memory interface.
The grounded user I/O pins can also be used as regular I/O pins if you run short of
available I/O pins; however, the hard memory interface performance will be reduced if
these pins are not connected to ground.
1.4.6 PLLs and Clock Networks
The exact number of clocks and PLLs required in your design depends greatly on the
memory interface frequency, and on the IP that your design uses.
For example, you can build simple DDR slow-speed interfaces that typically require
only two clocks: system and write. You can then use the rising and falling edges of
these two clocks to derive four phases (0°, 90°, 180°, and 270°). However, as clock speeds increase, the timing margin decreases and additional clocks are required to optimize setup and hold and meet timing. Typically, at higher clock speeds, you need dedicated clocks for resynchronization and for the address and command paths.
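As a minimal worked illustration (the quarter-period offset between the two clocks is an assumption for this example, not a value taken from the handbook), a system clock at 0° and a write clock at 90° together provide the four phases through their rising and falling edges:

$$\{0^{\circ},\,0^{\circ}{+}180^{\circ}\} \cup \{90^{\circ},\,90^{\circ}{+}180^{\circ}\} = \{0^{\circ},\,90^{\circ},\,180^{\circ},\,270^{\circ}\}$$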
Intel FPGA memory interface IP uses one PLL, which generates the various clocks
needed in the memory interface data path and controller, and provides the required
phase shifts for the write clock and address and command clock. The PLL is
instantiated when you generate the Intel FPGA memory IPs.
By default, the memory interface IP uses the PLL to generate the input reference clock
for the DLL, available in all supported device families. This method eliminates the need for an extra pin for the DLL input reference clock.
The input reference clock to the DLL can come from certain input clock pins or clock
output from certain PLLs.
Note:
Intel recommends using integer PLLs for memory interfaces; handbook specifications
are based on integer PLL implementations.
For the actual pins and PLLs connected to the DLLs, refer to the External Memory
Interfaces chapter of the relevant device family handbook.
You must use the PLL located in the same device quadrant or side as the memory
interface and the corresponding dedicated clock input pin for that PLL, to ensure
optimal performance and accurate timing results from the Quartus Prime software.
The input clock to the PLL can fan out to logic other than the PHY, provided that the clock input pin to the PLL is a dedicated input clock path, and that the clock domain transfer between UniPHY and the core logic is clocked by the reference clock routed onto a global clock network.
1.4.6.1 Number of PLLs Available in Intel Device Families
The following table lists the number of PLLs available in Intel device families.
Table 18.    Number of PLLs Available in Intel Device Families

Device Family        Enhanced PLLs Available
Arria II GX          4-6
Arria II GZ          3-8
Arria V              16-24
Arria V GZ (fPLL)    22-28
Cyclone V            4-8
MAX 10 FPGA          1-4
Stratix III          4-12
Stratix IV           3-12
Stratix V (fPLL)     22-28

Note to table:
1. For more details, refer to the Clock Networks and PLL chapter of the respective device family handbook.
1.4.6.2 Number of Enhanced PLL Clock Outputs and Dedicated Clock Outputs
Available in Intel Device Families
The following table lists the number of enhanced PLL clock outputs and dedicated
clock outputs available in Intel device families.
Table 19.    Number of Enhanced PLL Clock Outputs and Dedicated Clock Outputs Available in Intel Device Families (1)

• Arria II GX (2): Enhanced PLL clock outputs: 7 each. Dedicated clock outputs: 1 single-ended or 1 differential pair; 3 single-ended or 3 differential pairs total (3).
• Arria V: Enhanced PLL clock outputs: 18 each. Dedicated clock outputs: 4 single-ended, or 2 single-ended and 1 differential pair.
• Stratix III: Enhanced PLL clock outputs: left/right, 7; top/bottom, 10. Dedicated clock outputs: left/right, 2 single-ended or 1 differential pair; top/bottom, 6 single-ended, or 4 single-ended and 1 differential pair.
• Arria II GZ and Stratix IV: Enhanced PLL clock outputs: left/right, 7; top/bottom, 10. Dedicated clock outputs: left/right, 2 single-ended or 1 differential pair; top/bottom, 6 single-ended, or 4 single-ended and 1 differential pair.
• Arria V GZ and Stratix V: Enhanced PLL clock outputs: 18 each. Dedicated clock outputs: 4 single-ended, or 2 single-ended and 1 differential pair.

Notes to table:
1. For more details, refer to the Clock Networks and PLL chapter of the respective device family handbook.
2. PLL_5 and PLL_6 of Arria II GX devices do not have dedicated clock outputs.
3. The same PLL clock outputs drive three single-ended or three differential I/O pairs, which are supported only in PLL_1 and PLL_3 of the EP2AGX95, EP2AGX125, EP2AGX190, and EP2AGX260 devices.
1.4.6.3 Number of Clock Networks Available in Intel Device Families
The following table lists the number of clock networks available in Intel device
families.
Table 20.    Number of Clock Networks Available in Intel Device Families

Device Family    Global Clock Network (1)    Regional Clock Network
Arria II GX      16                          48
Arria II GZ      16                          64-88
Arria V          16                          88
Arria V GZ       16                          92
Cyclone V        16                          N/A
MAX 10 FPGA      10                          —
Stratix III      16                          64-88
Stratix IV       16                          64-88
Stratix V        16                          92

Note to table:
1. For more information on the number of available clock network resources per device quadrant, to better understand the number of clock networks available for your interface, refer to the Clock Networks and PLL chapter of the respective device family handbook.
Note:
You must decide whether you need to share clock networks, PLL clock outputs, or PLLs
if you are implementing multiple memory interfaces.
1.4.6.4 Clock Network Usage in UniPHY-based Memory Interfaces—DDR2 and
DDR3 SDRAM (1) (2)
The following table lists clock network usage in UniPHY-based memory interfaces for
DDR2 and DDR3 protocols.
Table 21.
Clock Network Usage in UniPHY-based Memory Interfaces—DDR2 and DDR3
SDRAM
Device
DDR3 SDRAM
DDR2 SDRAM
Half-Rate
Half-Rate
Number of full-rate
clock
Number of half-rate
clock
Number of full-rate
clock
Number of half-rate
clock
Stratix III
3 global
1 global
1 regional
1 global
2 global
1 global
1 regional
Arria II GZ and Stratix
IV
3 global
1 global
1 regional
1 regional
2 regional
1 global
1 regional
Arria V GZ and Stratix
V
1 global
2 regional
2 global
1 regional
2 regional
2 global
Notes to Table:
1. There are two additional regional clocks, pll_avl_clk and pll_config_clk for DDR2 and DDR3 SDRAM with
UniPHY memory interfaces.
2. In multiple interface designs with other IP, the clock network might need to be modified to get a design to fit. For more
information, refer to the Clock Networks and PLLs chapter in the respective device handbooks.
1.4.6.5 Clock Network Usage in UniPHY-based Memory Interfaces—RLDRAM II,
and QDR II and QDR II+ SRAM
The following table lists clock network usage in UniPHY-based memory interfaces for
RLDRAM II, QDR II, and QDR II+ protocols.
Table 22.
Clock Network Usage in UniPHY-based Memory Interfaces—RLDRAM II, and
QDR II and QDR II+ SRAM
Device
RLDRAM II
QDR II/QDR II+ SRAM
Half-Rate
Number of
full-rate clock
Number of
half-rate
clock
Full-Rate
Half-Rate
Number of
full-rate clock
Number of
full-rate clock
Number of
half-rate
clock
Full-Rate
Number of
full-rate clock
Arria II GX
—
—
—
2 global
2 global
4 global
Stratix III
2 regional
1 global
1 regional
1 global
2 regional
1 global
1 regional
2 regional
1 global
2 regional
Arria II GZ and
Stratix IV
2 regional
1 global
1 regional
1 global
2 regional
1 global
1 regional
2 regional
1 global
2 regional
Note:
For more information about the clocks used in UniPHY-based memory standards, refer
to the Functional Description—UniPHY chapter in volume 3 of the External Memory
Interface Handbook.
Related Links
Functional Description—UniPHY
1.4.6.6 PLL Usage for DDR, DDR2, and DDR3 SDRAM Without Leveling Interfaces
The following table lists PLL usage for DDR, DDR2, and DDR3 protocols without
leveling interfaces.
Table 23.
PLL Usage for DDR, DDR2, and DDR3 SDRAM Without Leveling Interfaces
Clock
C0
Arria II GX Devices
•
•
•
C1
•
•
phy_clk_1x in full-rate designs
aux_full_rate_clk
mem_clk_2x to generate DQS and CK/CK# signals
ac_clk_2x
•
cs_n_clk_2x
•
Unused
•
•
C2
C3
phy_clk_1x in half-rate designs
aux_half_rate_clk
PLL scan_clk
•
Stratix III and Stratix IV Devices
•
•
phy_clk_1x in half-rate designs
aux_half_rate_clk
PLL scan_clk
•
mem_clk_2x
•
•
phy_clk_1x in full-rate designs
•
aux_full_rate_clk
•
write_clk_2x
•
write_clk_2x (for DQ)
ac_clk_2x
•
cs_n_clk_2x
C4
•
resync_clk_2x
•
resync_clk_2x
C5
•
measure_clk_2x
•
measure_clk_1x
C6
—
•
ac_clk_1x
1.4.6.7 PLL Usage for DDR3 SDRAM With Leveling Interfaces
The following table lists PLL usage for DDR3 protocols with leveling interfaces.
Table 24.    PLL Usage for DDR3 SDRAM With Leveling Interfaces

Clock    Stratix III and Stratix IV Devices
C0       phy_clk_1x in half-rate designs; aux_half_rate_clk; PLL scan_clk
C1       mem_clk_2x
C2       aux_full_rate_clk
C3       write_clk_2x
C4       resync_clk_2x
C5       measure_clk_1x
C6       ac_clk_1x
1.5 Using PLL Guidelines
When using a PLL for external memory interfaces, consider the following guidelines:
•
For the clock source, use the clock input pin specifically dedicated to the PLL that
you want to use with your external memory interface. The input and output pins
are only fully compensated when you use the dedicated PLL clock input pin. If the
clock source for the PLL is not a dedicated clock input pin for the dedicated PLL,
you would need an additional clock network to connect the clock source to the PLL
block. Using additional clock network may increase clock jitter and degrade the
timing margin.
•
Pick a PLL and PLL input clock pin that are located on the same side of the device
as the memory interface pins.
•
Share the DLL and PLL static clocks for multiple memory interfaces provided the
controllers are on the same or adjacent side of the device and run at the same
memory clock frequency.
•
If your design uses a dedicated PLL to only generate a DLL input reference clock,
you must set the PLL mode to No Compensation in the Quartus Prime software
to minimize the jitter, or the software forces this setting automatically. The PLL
does not generate other output, so it does not need to compensate for any clock
path.
•
If your design cascades PLLs, the source (upstream) PLL must have a low-bandwidth setting, while the destination (downstream) PLL must have a high-bandwidth setting, to minimize jitter. Intel does not recommend using cascaded PLLs for external memory interfaces, because jitter accumulates through the cascade and the memory output clock may violate the memory device jitter specification.
•
Use cascading PLLs at your own risk. For more information, refer to “PLL
Cascading”.
•
If you are using Arria II GX devices, for a single memory instance that spans two
right-side quadrants, use a middle-side PLL as the source for that interface.
•
If you are using Arria II GZ, Arria V GZ, Stratix III, Stratix IV, or Stratix V devices,
for a single memory instance that spans two top or bottom quadrants, use a
middle top or bottom PLL as the source for that interface. The ten dual regional
clocks that the single interface requires must not block the design using the
adjacent PLL (if available) for a second interface.
Related Links
PLL Cascading on page 77
Arria II GZ PLLs, Stratix III PLLs, Stratix IV PLLs, Stratix V and Arria V GZ
fractional PLLs (fPLLs), and the two middle PLLs in Arria II GX EP2AGX95,
EP2AGX125, EP2AGX190, and EP2AGX260 devices can be cascaded using either
the global or regional clock trees, or the cascade path between two adjacent PLLs.
1.6 PLL Cascading
Arria II GZ PLLs, Stratix III PLLs, Stratix IV PLLs, Stratix V and Arria V GZ fractional
PLLs (fPLLs), and the two middle PLLs in Arria II GX EP2AGX95, EP2AGX125,
EP2AGX190, and EP2AGX260 devices can be cascaded using either the global or
regional clock trees, or the cascade path between two adjacent PLLs.
Note:
Use cascading PLLs at your own risk. You should use faster memory devices to
maximize timing margins.
The UniPHY IP supports PLL cascading using the cascade path without any additional
timing derating when the bandwidth and compensation rules are followed. The timing
constraints and analysis assume that there is no additional jitter due to PLL cascading
when the upstream PLL uses no compensation and low bandwidth, and the
downstream PLL uses no compensation and high bandwidth.
The UniPHY IP does not support PLL cascading using the global and regional clock
networks. You can implement PLL cascading at your own risk without any additional
guidance and specifications from Intel. The Quartus Prime software does issue a
critical warning suggesting use of the cascade path to minimize jitter, but does not
explicitly state that Intel does not support cascading using global and regional clock
networks.
Some Arria II GX devices (EP2AGX95, EP2AGX125, EP2AGX190, and EP2AGX260)
have direct cascade path for two middle right PLLs. Arria II GX PLLs have the same
bandwidth options as Stratix IV GX left and right PLLs.
The Arria 10 External Memory Interface IP does not support PLL cascading.
1.7 DLL
The Intel FPGA memory interface IP uses one DLL. The DLL is located at the corner of
the device and can send the control signals to shift the DQS pins on its adjacent sides
for Stratix-series devices, or DQS pins in any I/O banks in Arria II GX devices.
For example, the top-left DLL can shift DQS pins on the top side and left side of the
device. The DLL generates the same phase shift resolution for both sides, but can
generate different phase offset to the two different sides, if needed. Each DQS pin can
be configured to use or ignore the phase offset generated by the DLL.
The DLL cannot generate two different phase offsets to the same side of the device. However, you can use two different DLLs to achieve this functionality.
DLL reference clocks must come from either dedicated clock input pins located on
either side of the DLL or from specific PLL output clocks. Any clock running at the
memory frequency is valid for the DLLs.
To minimize the number of clocks routed directly on the PCB, this reference clock is typically sourced from the memory controller's PLL. In general, DLLs can use the PLLs directly adjacent to them (corner PLLs when available) or the closest PLL located on the two sides adjacent to their location.
Note:
By default, the DLL reference clock in Intel FPGA external memory IP is from a PLL
output.
When designing for 780-pin packages with EP3SE80, EP3SE110, EP3SL150,
EP4SE230, EP4SE360, EP4SGX180, and EP4SGX230 devices, the PLL to DLL reference
clock connection is limited. DLL2 is isolated from a direct PLL connection and can only
receive a reference clock externally from pins CLK[11:4]p in EP3SE80, EP3SE110,
EP3SL150, EP4SE230, and EP4SE360 devices. In EP4SGX180 and EP4SGX230 devices,
DLL2 and DLL3 are not directly connected to PLL. DLL2 and DLL3 receive a reference
clock externally from pins CLK[7:4]p and CLK[15:12]p respectively.
For more DLL information, refer to the respective device handbooks.
The DLL reference clock should be the same frequency as the memory interface, but
the phase is not important.
The required DQS capture phase is optimally chosen based on operating frequency
and external memory interface type (DDR, DDR2, DDR3 SDRAM, and QDR II SRAM, or
RLDRAM II). As each DLL supports two possible phase offsets, two different memory
interface types operating at the same frequency can easily share a single DLL. More
may be possible, depending on the phase shift required.
Intel FPGA memory IP always specifies a default optimal phase setting; to override this setting, refer to Implementing and Parameterizing Memory IP.
When sharing DLLs, your memory interfaces must be of the same frequency. If the
required phase shift is different amongst the multiple memory interfaces, you can use
a different delay chain in the DQS logic block or use the DLL phase offset feature.
To simplify the interface to IP connections, multiple memory interfaces operating at
the same frequency usually share the same system and static clocks as each other
where possible. This sharing minimizes the number of dedicated clock nets required
and reduces the number of different clock domains found within the same design.
Each DLL can directly drive four banks, but each PLL has complete C (output) counter coverage of only two banks (using dual regional networks), so situations can occur where a second PLL operating at the same frequency is required. Because cascaded PLLs increase jitter and reduce timing margin, first ascertain whether an alternative second DLL and PLL combination is available and more optimal.
Select a DLL that is available for the side of the device where the memory interface
resides. If you select a PLL or a PLL input clock reference pin that can also serve as
the DLL input reference clock, you do not need an extra input pin for the DLL input
reference clock.
Related Links
Implementing and Parameterizing Memory IP on page 186
The following topics describe the general overview of the IP core design flow to
help you quickly get started with any IP core.
1.8 Other FPGA Resources
The Intel FPGA memory interface IP uses FPGA fabric, including registers and the
Memory Block to implement the memory interface.
For resource utilization examples to ensure that you can fit your other modules in the
device, refer to the “Resource Utilization” section in the Introduction to UniPHY IP
chapter of the External Memory Interface Handbook.
One OCT calibration block is used if you are using the FPGA OCT feature in the memory interface. The OCT calibration block uses two pins (RUP and RDN) or a single pin (RZQ) (see “OCT Support for Arria II GX, Arria II GZ, Arria V, Arria V GZ, Cyclone V, Stratix III, Stratix IV, and Stratix V Devices”). You can select any of the available OCT calibration blocks, because you do not need to place this block in the same bank or device side as your memory interface. The only requirement is that the I/O bank where you place the OCT calibration block uses the same VCCIO voltage as the memory interface. Multiple memory interfaces can share the same OCT calibration block if the VCCIO voltage is the same.
Related Links
OCT Support on page 37
If the memory interface uses any FPGA OCT calibrated series, parallel, or dynamic
termination for any I/O in your design, you need a calibration block for the OCT
circuitry.
1.9 Document Revision History
May 2017 (version 2017.05.08):
• Added Guidelines for Stratix 10 External Memory Interface IP, General Pin-Out Guidelines for Stratix 10 EMIF IP, and Resource Sharing Guidelines for Stratix 10 EMIF IP sections.
• Rebranded as Intel.

October 2016 (version 2016.10.31):
• Removed paragraph from Address and Command description in several pin utilization tables.

May 2016 (version 2016.05.02):
• Modified Data Strobe and Address data in UDIMM, RDIMM, and LRDIMM Pin Options for DDR4 table in DDR, DDR2, DDR3, and DDR4 SDRAM DIMM Options. Added notes to table.

November 2015 (version 2015.11.02):
• Changed instances of Quartus II to Quartus Prime.
• Modified I/O Banks Selection, PLL Reference Clock and RZQ Pins Placement, and Ping Pong PHY Implementation sections in General Pin-Out Guidelines for Arria 10 EMIF IP.
• Added Additional Requirements for DDR3 and DDR4 Ping-Pong PHY Interfaces in General Pin-Out Guidelines for Arria 10 EMIF IP.
• Removed references to OCT Blocks from Resource Sharing Guidelines for Arria 10 EMIF IP section.
• Added LPDDR3.

May 2015 (version 2015.05.04):
• Removed the F672 package of the 10M25 device.
• Updated the additional guidelines for MAX 10 devices to improve clarity.
• Added related information link to the MAX 10 FPGA Signal Integrity Design Guidelines for the Additional Guidelines for MAX 10 Devices topic.

December 2014 (version 2014.12.15):
• General Pin-Out Guidelines for Arria 10 EMIF IP section:
  — Added note to step 10.
  — Removed steps 13 and 14.
  — Added a bullet point to Address/Command Pins Location.
  — Added Ping Pong PHY Implementation.
  — Added parenthetical comment to fifth bullet point in I/O Banks Selection.
  — Added note following the procedure, advising that all pins in a DQS group should reside in the same I/O bank, for RLDRAM II and RLDRAM 3 interfaces.
• Added QDR IV SRAM Clock Signals; QDR IV SRAM Commands and Addresses, AP, and AINV Signals; and QDR IV SRAM Data, DINV, and QVLD Signals topics.
• Added note to Estimating Pin Requirements section.
• DDR, DDR2, DDR3, and DDR4 SDRAM DIMM Options section:
  — Added UDIMM, RDIMM, and LRDIMM Pin Options for DDR4 table.
  — Changed notes to LRDIMM Pin Options for DDR, DDR2, and DDR3 table.
  — Removed reference to Chip ID pin.

August 2014 (version 2014.08.15):
• Made several changes to Pin Counts for Various Example Memory Interfaces table:
  — Added DDR4 SDRAM and RLDRAM 3 CIO.
  — Removed x72 rows from table entries for DDR, DDR2, and DDR3.
  — Added Arria 10 to note 11.
  — Added notes 12-18.
• Added DDR4 to descriptions of clock signals; command and address signals; data, data strobe, DM/DBI, and optional ECC signals; and SDRAM DIMM options.
• Added QDR II+ Xtreme to descriptions of SRAM clock signals, SRAM command signals, SRAM address signals, and SRAM data, BWS, and QVLD signals.
• Changed title of section OCT Support for Arria II GX, Arria II GZ, Arria V, Arria V GZ, Cyclone V, Stratix III, Stratix IV, and Stratix V Devices to OCT Support.
• Reorganized chapter to have separate sections for Guidelines for Arria 10 External Memory Interface IP and Guidelines for UniPHY-based External Memory Interface IP.
• Revised Arria 10-specific guidelines.

December 2013 (version 2013.12.16):
• Removed references to ALTMEMPHY and HardCopy.
• Removed references to Cyclone III and Cyclone IV devices.

November 2012 (version 6.0):
• Added Arria V GZ information.
• Added RLDRAM 3 information.
• Added LRDIMM information.

June 2012 (version 5.0):
• Added LPDDR2 information.
• Added Cyclone V information.
• Added Feedback icon.

November 2011 (version 4.0):
• Moved and reorganized Planning Pin and Resource section to Volume 2: Design Guidelines.
• Added Additional Guidelines for Arria V GZ and Stratix V Devices section.
• Added Arria V and Cyclone V information.

June 2011 (version 3.0):
• Moved Select a Device and Memory IP Planning chapters to Volume 1.
• Added information about interface pins.
• Added guidelines for using PLL.

December 2010 (version 2.1):
• Added a new section on controller efficiency.
• Added Arria II GX and Stratix V information.

July 2010 (version 2.0):
• Updated information about UniPHY-based interfaces and Stratix V devices.

April 2010 (version 1.0):
• Initial release.
2 DDR2, DDR3, and DDR4 SDRAM Board Design Guidelines
The following topics provide guidelines for improving the signal integrity of your
system and for successfully implementing a DDR2, DDR3, or DDR4 SDRAM interface
on your system.
The following areas are discussed:
• comparison of various types of termination schemes, and their effects on the signal quality at the receiver
• proper drive strength setting on the FPGA to optimize the signal integrity at the receiver
• effects of different loading types, such as components versus DIMM configuration, on signal quality
It is important to understand the trade-offs between different types of termination
schemes, the effects of output drive strengths, and different loading types, so that
you can swiftly navigate through the multiple combinations and choose the best
possible settings for your designs.
The following key factors affect signal quality at the receiver:
• Leveling and dynamic ODT
• Proper use of termination
• Layout guidelines
As memory interface performance increases, board designers must pay closer
attention to the quality of the signal seen at the receiver because poorly transmitted
signals can dramatically reduce the overall data-valid margin at the receiver. The
following figure shows the differences between an ideal and real signal seen by the
receiver.
Figure 12. Ideal and Real Signal at the Receiver
(Voltage-versus-time waveforms comparing an ideal signal and a real signal at the receiver, with the VIH and VIL thresholds marked.)
2.1 Leveling and Dynamic Termination
DDR3 and DDR4 SDRAM DIMMs, as specified by JEDEC, always use a fly-by topology
for the address, command, and clock signals.
Intel recommends that for full DDR3 or DDR4 SDRAM compatibility when using
discrete DDR3 or DDR4 SDRAM components, you should mimic the JEDEC DDR3 or
DDR4 fly-by topology on your custom printed circuit boards (PCB).
Note:
Arria® II, Arria V GX, Arria V GT, Arria V SoC, Cyclone® V, and Cyclone V SoC devices
do not support DDR3 SDRAM with read or write leveling, so these devices do not
support standard DDR3 SDRAM DIMMs or DDR3 SDRAM components using the
standard DDR3 SDRAM fly-by address, command, and clock layout topology.
Table 25.
Device Family Topology Support
Device
I/O Support
Arria II
Non-leveling
Arria V GX, Arria V GT, Arria V SoC
Non-leveling
Arria V GZ
Leveling
Cyclone V GX, Cyclone V GT, Cyclone V SoC
Non-leveling
Stratix III
Leveling
Stratix IV
Leveling
Stratix V
Leveling
Arria 10
Leveling
Stratix 10
Leveling
Related Links
www.JEDEC.org
2.1.1 Read and Write Leveling
A major difference between DDR2 and DDR3/DDR4 SDRAM is the use of leveling. To
improve signal integrity and support higher frequency operations, the JEDEC
committee defined a fly-by termination scheme used with clocks, and command and
address bus signals.
Note:
This section describes read and write leveling in terms of a comparison between DDR3
and DDR2. Leveling in DDR4 is fundamentally similar to DDR3. Refer to the DDR4
JEDEC specifications for more information.
The following section describes leveling in DDR3, and is equally applicable to DDR4.
Fly-by topology reduces simultaneous switching noise (SSN) by deliberately causing
flight-time skew between the data and strobes at every DRAM as the clock, address,
and command signals traverse the DIMM, as shown in the following figure.
Figure 13. DDR3 DIMM Fly-By Topology Requiring Write Leveling
(Command, address, and clock signals route in a fly-by topology through the DDR3 DIMM to a VTT termination; the resulting data skew is calibrated out at power-up with write leveling.)
The flight-time skew caused by the fly-by topology led the JEDEC committee to
introduce the write leveling feature on the DDR3 SDRAMs. Controllers must
compensate for this skew by adjusting the timing per byte lane.
During a write, DQS groups launch at separate times to coincide with a clock arriving
at components on the DIMM, and must meet the timing parameter between the
memory clock and DQS defined as tDQSS of ± 0.25 tCK.
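For a sense of scale (the data rate below is an illustrative example, not a figure from the handbook), at DDR3-1066 the memory clock runs at 533 MHz, so:

$$t_{CK} = \frac{1}{533~\mathrm{MHz}} \approx 1.875~\mathrm{ns}, \qquad t_{DQSS} = \pm\,0.25\,t_{CK} \approx \pm 469~\mathrm{ps}$$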
During the read operation, the memory controller must compensate for the delays
introduced by the fly-by topology. The Stratix® III, Stratix IV, and Stratix V FPGAs
have alignment and synchronization registers built in the I/O element to properly
capture the data.
In DDR2 SDRAM, there are only two drive strength settings, full or reduced, which
correspond to the output impedance of 18-ohm and 40-ohm, respectively. These
output drive strength settings are static settings and are not calibrated; consequently,
the output impedance varies as the voltage and temperature drifts.
The DDR3 SDRAM uses a programmable impedance output buffer. There are two drive strength settings, 34-ohm and 40-ohm. The 40-ohm drive strength setting is currently a reserved specification defined by JEDEC, but is available on DDR3 SDRAM as offered by some memory vendors. Refer to the data sheet of the respective memory vendor for more information about the output impedance setting.
You select the drive strength settings by programming the memory mode register
defined by mode register 1 (MR1). To calibrate output driver impedance, an external
precision resistor, RZQ, connects the ZQ pin and VSSQ. The value of this resistor must
be 240-ohm ± 1%.
If you are using a DDR3 SDRAM DIMM, RZQ is soldered on the DIMM so you do not
need to layout your board to account for it. Output impedance is set during
initialization. To calibrate output driver impedance after power-up, the DDR3 SDRAM
needs a calibration command that is part of the initialization and reset procedure and
is updated periodically when the controller issues a calibration command.
In addition to calibrated output impedance, the DDR3 SDRAM also supports calibrated parallel ODT through the same external precision resistor, RZQ. This is possible because of a merged output driver structure in the DDR3 SDRAM, which also helps to improve pin capacitance on the DQ and DQS pins. The ODT values supported in DDR3 SDRAM are 20-ohm, 30-ohm, 40-ohm, 60-ohm, and 120-ohm, assuming that RZQ is 240-ohm.
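The calibrated values above are integer fractions of RZQ (this is the JEDEC DDR3 convention; the small sketch below is illustrative and not part of the handbook):

```python
# DDR3 calibrated output impedance and ODT values derived from RZQ = 240 ohm.
RZQ = 240.0  # external precision resistor, +/-1%

# Output driver impedance settings (MR1): RZQ/7 ~ 34 ohm, RZQ/6 = 40 ohm.
drive_strength_ohm = {"RZQ/7": RZQ / 7, "RZQ/6": RZQ / 6}

# Parallel ODT settings: RZQ/2, RZQ/4, RZQ/6, RZQ/8, RZQ/12
# -> 120, 60, 40, 30, 20 ohm, matching the values listed above.
odt_rtt_ohm = {f"RZQ/{n}": RZQ / n for n in (2, 4, 6, 8, 12)}

print(drive_strength_ohm)
print(odt_rtt_ohm)
```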
Related Links
www.JEDEC.org
2.1.2 Dynamic ODT
Dynamic ODT is a feature in DDR3 and DDR4 SDRAM that is not available in DDR2
SDRAM. Dynamic ODT can change the ODT setting without issuing a mode register set
(MRS) command.
Note:
This topic highlights the dynamic ODT feature in DDR3. To learn about dynamic ODT in
DDR4, refer to the JEDEC DDR4 specifications.
When you enable dynamic ODT, and there is no write operation, the DDR3 SDRAM
terminates to a termination setting of RTT_NOM; when there is a write operation, the
DDR3 SDRAM terminates to a setting of RTT_WR. You can preset the values of
RTT_NOM and RTT_WR by programming the mode registers, MR1 and MR2.
The following figure shows the behavior of ODT when you enable dynamic ODT.
Figure 14.
Dynamic ODT: Behavior with ODT Asserted Before and After the Write
In the multi-load DDR3 SDRAM configuration, dynamic ODT helps reduce the jitter at
the module being accessed, and minimizes reflections from any secondary modules.
For more information about using the dynamic ODT on DDR3 SDRAM, refer to the
application note by Micron, TN-41-04 DDR3 Dynamic On-Die Termination.
In addition to RTT_NOM and RTT_WR, DDR4 has RTT_PARK which applies a specified
termination value when the ODT signal is low.
Related Links
www.JEDEC.org
2.1.3 Dynamic On-Chip Termination
Dynamic OCT is available in Arria V, Arria 10, Cyclone V, Stratix III, Stratix IV,
Stratix V, and Stratix 10.
The dynamic OCT scheme enables series termination (RS) and parallel termination
(RT) to be dynamically turned on and off during the data transfer. The series and
parallel terminations are turned on or off depending on the read and write cycle of the
interface. During the write cycle, the RS is turned on and the RT is turned off to match
the line impedance. During the read cycle, the RS is turned off and the RT is turned on
as the FPGA implements the far-end termination of the bus.
For more information about dynamic OCT, refer to the I/O features chapters in the
devices handbook for your Intel device.
2.1.3.1 FPGA Writing to Memory
The benefit of using dynamic series OCT is that when the driver is driving the transmission line, it “sees” a matched transmission line with no external resistor termination.
The following figure shows dynamic series OCT scheme when the FPGA is writing to
the memory.
Figure 15.
Dynamic Series OCT Scheme with ODT on the Memory
FPGA
DIMM
Component
50
Driver
100
50 w
RS
150
3” Trace Length
Receiver
Driver
Receiver
100
150
Refer to the memory vendors when determining the over- and undershoot. They
typically specify a maximum limit on the input voltage to prevent reliability issues.
2.1.3.2 FPGA Reading from Memory
The following figure shows the dynamic parallel termination scheme when the FPGA is
reading from memory.
When the SDRAM DIMM is driving the transmission line, the ringing and reflection is
minimal because the FPGA-side termination 50-ohm pull-up resistor is matched with
the transmission line.
Figure 16.
Dynamic Parallel OCT Scheme with Memory-Side Series Resistor
FPGA
DIMM Full Strength
DRAM Component
100 Ohm
Driver
50 Ohm
3” Trace Length
Receiver
RS
Driver
Receiver
100 Ohm
2.1.4 Dynamic On-Chip Termination in Stratix III and Stratix IV Devices
Stratix III and Stratix IV devices support on-off dynamic series and parallel
termination for a bidirectional I/O in all I/O banks. Dynamic OCT is a new feature in
Stratix III and Stratix IV FPGA devices.
You enable dynamic parallel termination only when the bidirectional I/O acts as a receiver, and disable it when the bidirectional I/O acts as a driver. Similarly, you enable dynamic series termination only when the bidirectional I/O acts as a driver, and disable it when the bidirectional I/O acts as a receiver. The default setting for dynamic OCT is series termination, to save power when the interface is idle (no active reads or writes).
Note:
The dynamic control operation of the OCT is separate from the output enable signal for the buffer. UniPHY IP can enable parallel OCT only during read cycles, saving power when the interface is idle.
Figure 17.
Dynamic OCT Between Stratix III and Stratix IV FPGA Devices
(Two panels show the FPGA and a DDR3 DIMM component connected by a 3-inch trace with a 15-ohm series resistor RS on the DIMM and VREF = 0.75 V at each receiver: one panel with the FPGA writing through its series OCT into the memory-side 100-ohm ODT, and one panel with the memory driving at 34 ohm into the FPGA's calibrated 100-ohm parallel OCT during reads.)
Dynamic OCT is useful for terminating any high-performance bidirectional path
because signal integrity is optimized depending on the direction of the data. In
addition, dynamic OCT also eliminates the need for external termination resistors
when used with memory devices that support ODT (such as DDR3 SDRAM), thus
reducing cost and easing board layout.
However, dynamic OCT in Stratix III and Stratix IV FPGA devices is different from
dynamic ODT in DDR3 SDRAM mentioned in previous sections and these features
should not be assumed to be identical.
For detailed information about the dynamic OCT feature in the Stratix III FPGA, refer
to the Stratix III Device I/O Features chapter in volume 1 of the Stratix III Device
Handbook.
For detailed information about the dynamic OCT feature in the Stratix IV FPGA, refer
to the I/O Features in Stratix IV Devices chapter in volume 1 of the Stratix IV Device
Handbook.
Related Links
•
Stratix III Device I/O Features
•
I/O Features in Stratix IV Devices
2.1.5 Dynamic OCT in Stratix V Devices
Stratix V devices also support the dynamic OCT feature and provide more flexibility.
Stratix V OCT calibration uses one RZQ pin that exists in every OCT block.
You can use any one of the following as a reference resistor on the RZQ pin to implement different OCT values:
• 240-ohm reference resistor: to implement RS OCT of 34-ohm, 40-ohm, 48-ohm, 60-ohm, and 80-ohm; and RT OCT resistance of 20-ohm, 30-ohm, 40-ohm, and 120-ohm
• 100-ohm reference resistor: to implement RS OCT of 25-ohm and 50-ohm; and RT OCT resistance of 50-ohm
For detailed information about the dynamic OCT feature in the Stratix V FPGA, refer to
the I/O Features in Stratix V Devices chapter in volume 1 of the Stratix V Device
Handbook.
Related Links
I/O Features in Stratix V Devices
2.1.6 Dynamic On-Chip Termination (OCT) in Arria 10 and Stratix 10
Devices
Depending upon the Rs (series) and Rt (parallel) OCT values that you want, you
should choose appropriate values for the RZQ resistor and connect this resistor to the
RZQ pin of the FPGA.
• Select a 240-ohm reference resistor to ground to implement Rs OCT values of 34-ohm, 40-ohm, 48-ohm, 60-ohm, and 80-ohm, and Rt OCT resistance values of 20-ohm, 30-ohm, 34-ohm, 40-ohm, 60-ohm, 80-ohm, 120-ohm, and 240-ohm.
• Select a 100-ohm reference resistor to ground to implement Rs OCT values of 25-ohm and 50-ohm, and an Rt OCT resistance of 50-ohm.
Check the FPGA I/O tab of the parameter editor to determine the I/O standards and
termination values supported for data, address and command, and memory clock
signals.
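Restating the two bullets above as a lookup (an illustrative sketch only; check the FPGA I/O tab of the parameter editor for the values your configuration actually supports):

```python
# Supported calibrated OCT values (ohms) by RZQ reference resistor,
# per the bullets above for Arria 10 and Stratix 10 devices.
OCT_BY_RZQ = {
    240: {"Rs": [34, 40, 48, 60, 80],
          "Rt": [20, 30, 34, 40, 60, 80, 120, 240]},
    100: {"Rs": [25, 50],
          "Rt": [50]},
}

def supported_oct(rzq_ohm: int) -> dict:
    """Return the series (Rs) and parallel (Rt) OCT values for a given RZQ."""
    return OCT_BY_RZQ[rzq_ohm]

print(supported_oct(240)["Rt"])  # [20, 30, 34, 40, 60, 80, 120, 240]
```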
2.2 DDR2 Terminations and Guidelines
This section provides information for DDR2 SDRAM interfaces.
2.2.1 Termination for DDR2 SDRAM
DDR2 adheres to the JEDEC standard of governing Stub-Series Terminated Logic
(SSTL), JESD8-15a, which includes four different termination schemes.
Two commonly used termination schemes of SSTL are:
•
Single parallel terminated output load with or without series resistors (Class I, as
stated in JESD8-15a)
•
Double parallel terminated output load with or without series resistors (Class II, as
stated in JESD8-15a)
Depending on the type of signals you choose, you can use either termination scheme.
Also, depending on your design’s FPGA and SDRAM memory devices, you may choose
external or internal termination schemes.
To reduce system cost and simplify printed circuit board layout, you may choose not to
have any parallel termination on the transmission line, and use point-to-point
connections between the memory interface and the memory. In this case, you may
take advantage of internal termination schemes such as on-chip termination (OCT) on
the FPGA side and on-die termination (ODT) on the SDRAM side when it is offered on
your chosen device.
Related Links
DDR3 Terminations in Arria V, Cyclone V, Stratix III, Stratix IV, and Stratix V on page
99
DDR3 DIMMs have terminations on all unidirectional signals, such as memory
clocks, and addresses and commands; thus eliminating the need for them on the
FPGA PCB.
2.2.1.1 External Parallel Termination
If you use external termination, you must study the locations of the termination
resistors to determine which topology works best for your design.
The following two figures illustrate the most common termination topologies: fly-by
topology and non-fly-by topology, respectively.
Figure 18.
Fly-By Placement of a Parallel Resistor
(The FPGA driver connects through a board trace to the DDR2 SDRAM DIMM receiver, with a further board trace to RT = 50 Ω terminated to VTT after the receiver.)
With fly-by topology, you place the parallel termination resistor after the receiver. This
termination placement resolves the undesirable unterminated stub found in the
non-fly-by topology. However, using this topology can be costly and complicate
routing.
Figure 19.
Non-Fly-By Placement of a Parallel Resistor
(The FPGA driver connects to the DDR2 SDRAM DIMM receiver, with RT = 50 Ω terminated to VTT placed between the driver and the receiver, close to the receiver.)
With non-fly-by topology, the parallel termination resistor is placed between the driver
and receiver (closest to the receiver). This termination placement is easier for board
layout, but results in a short stub, which causes an unterminated transmission line
between the terminating resistor and the receiver. The unterminated transmission line
results in ringing and reflection at the receiver.
If you do not use external termination, DDR2 offers ODT and Intel FPGAs have varying
levels of OCT support. You should explore using ODT and OCT to decrease the board
power consumption and reduce the required board space.
2.2.1.2 On-Chip Termination
OCT technology is offered on Arria II GX, Arria II GZ, Arria V, Arria 10, Cyclone V, MAX
10, Stratix III, Stratix IV, and Stratix V devices.
The following table summarizes the extent of OCT support for devices earlier than Arria 10. The table provides information for the SSTL-18 standard, because SSTL-18 is the standard that Intel FPGAs support for DDR2 memory interfaces.
For Arria II, Stratix III and Stratix IV devices, on-chip series (RS) termination is supported only on output and bidirectional buffers. The value of RS with calibration is calibrated against a 25-ohm resistor for Class II and a 50-ohm resistor for Class I connected to the RUP and RDN pins, and adjusted to ±1% of 25 ohms or 50 ohms. On-chip parallel (RT) termination is supported only on inputs and bidirectional buffers. The value of RT is calibrated against a 100-ohm resistor connected to the RUP and RDN pins. Calibration occurs at the end of device configuration. Dynamic OCT is supported only on bidirectional I/O buffers.
For Arria V, Cyclone V, and Stratix V devices, RS and RT values are calibrated against the on-board resistor RZQ. If you want 25-ohm or 50-ohm values for your RS and RT, you must connect a 100-ohm resistor with a tolerance of +/- 1% to the RZQ pin.
For more information about on-chip termination, refer to the device handbook for the
device that you are using.
Table 26. On-Chip Termination Schemes (SSTL-18)
Arria II GX, Arria II GZ, Arria V, Cyclone V, MAX 10, and Stratix III and Stratix IV support column and row I/O; Stratix V supports column I/O only (1).

Termination Scheme | Class | Arria II GX | Arria II GZ | Arria V | Cyclone V | MAX 10 | Stratix III and Stratix IV | Stratix V (1)
On-Chip Series Termination without Calibration | Class I | 50 | 50 | 50 | 50 | 50 | 50 | 50
On-Chip Series Termination without Calibration | Class II | 25 | 25 | 25 | 25 | 25 | 25 | 25
On-Chip Series Termination with Calibration | Class I | 50 | 50 | 50 | 50 | 50 | 50 | 50
On-Chip Series Termination with Calibration | Class II | 25 | 25 | 25 | 25 | 25 | 25 | 25
On-Chip Parallel Termination with Calibration | Class I and Class II | — | 50 | 50 | 50 | — | 50 | 50

Note to Table:
1. Row I/O is not available for external memory interfaces in Stratix V devices.
2.2.1.3 Recommended Termination Schemes
The following table provides the recommended termination schemes for major DDR2
memory interface signals.
Signals include data (DQ), data strobe (DQS/DQSn), data mask (DM), clocks
(mem_clk/mem_clk_n), and address and command signals.
When interfacing with multiple DDR2 SDRAM components where the address,
command, and memory clock pins are connected to more than one load, follow these
steps:
1. Simulate the system to get the new slew-rate for these signals.
2. Use the derated tIS and tIH specifications from the DDR2 SDRAM data sheet based on the simulation results.
3. If timing deration causes your interface to fail timing requirements, consider signal duplication of these signals to lower their loading, and hence improve timing.
Note: Intel uses Class I and Class II termination in this table to refer to drive strength, and not physical termination.
Note: You must simulate your design for your system to ensure correct operation.
Table 27. Termination Recommendations
Columns: Signal Type | SSTL-18 I/O Standard (2) (3) (4) (5) (6) | FPGA-End Discrete Termination | Memory-End Termination 1 (Rank/DIMM) | Memory I/O Standard

Arria II GX, DDR2 component:
DQ | Class I R50 CAL | 50-ohm parallel to VTT discrete | ODT75 (7) | HALF (8)
DQS DIFF (13) | DIFF Class I R50 CAL | 50-ohm parallel to VTT discrete | ODT75 (7) | HALF (8)
DQS SE (12) | Class I R50 CAL | 50-ohm parallel to VTT discrete | ODT75 (7) | HALF (8)
DM | Class I R50 CAL | N/A | ODT75 (7) | N/A
Address and command | Class I MAX | N/A | 56-ohm parallel to VTT discrete | N/A
Clock | DIFF Class I R50 CAL | N/A | x1 = 100-ohm differential (10); x2 = 200-ohm differential (11) | N/A

Arria II GX, DDR2 DIMM:
DQ | Class I R50 CAL | 50-ohm parallel to VTT discrete | ODT75 (7) | FULL (9)
DQS DIFF (13) | DIFF Class I R50 CAL | 50-ohm parallel to VTT discrete | ODT75 (7) | FULL (9)
DQS SE (12) | Class I R50 CAL | 50-ohm parallel to VTT discrete | ODT75 (7) | FULL (9)
DM | Class I R50 CAL | N/A | ODT75 (7) | N/A
Address and command | Class I MAX | N/A | 56-ohm parallel to VTT discrete | N/A
Clock | DIFF Class I R50 CAL | N/A | N/A (termination on DIMM) | N/A

Arria V and Cyclone V, DDR2 component:
DQ | Class I R50/P50 DYN CAL | N/A | ODT75 (7) | HALF (8)
DQS DIFF (13) | DIFF Class I R50/P50 DYN CAL | N/A | ODT75 (7) | HALF (8)
DQS SE (12) | Class I R50/P50 DYN CAL | N/A | ODT75 (7) | HALF (8)
DM | Class I R50 CAL | N/A | ODT75 (7) | N/A
Address and command | Class I MAX | N/A | 56-ohm parallel to VTT discrete | N/A
Clock | DIFF Class I R50 NO CAL | N/A | x1 = 100-ohm differential (10); x2 = 200-ohm differential (11) | N/A

Arria V and Cyclone V, DDR2 DIMM:
DQ | Class I R50/P50 DYN CAL | N/A | ODT75 (7) | FULL (9)
DQS DIFF (13) | DIFF Class I R50/P50 DYN CAL | N/A | ODT75 (7) | FULL (9)
DQS SE (12) | Class I R50/P50 DYN CAL | N/A | ODT75 (7) | FULL (9)
DM | Class I R50 CAL | N/A | ODT75 (7) | N/A
Address and command | Class I MAX | N/A | 56-ohm parallel to VTT discrete | N/A
Clock | DIFF Class I R50 NO CAL | N/A | N/A (termination on DIMM) | N/A

Arria II GZ, Stratix III, Stratix IV, and Stratix V, DDR2 component:
DQ | Class I R50/P50 DYN CAL | N/A | ODT75 (7) | HALF (8)
DQS DIFF (13) | DIFF Class I R50/P50 DYN CAL | N/A | ODT75 (7) | HALF (8)
DQS SE (12) | DIFF Class I R50/P50 DYN CAL | N/A | ODT75 (7) | HALF (8)
DM | Class I R50 CAL | N/A | ODT75 (7) | N/A
Address and command | Class I MAX | N/A | 56-ohm parallel to VTT discrete | N/A
Clock | DIFF Class I R50 NO CAL | N/A | x1 = 100-ohm differential (10); x2 = 200-ohm differential (11) | N/A

Arria II GZ, Stratix III, Stratix IV, and Stratix V, DDR2 DIMM:
DQ | Class I R50/P50 DYN CAL | N/A | ODT75 (7) | FULL (9)
DQS DIFF (13) | DIFF Class I R50/P50 DYN CAL | N/A | ODT75 (7) | FULL (9)
DQS SE (12) | Class I R50/P50 DYN CAL | N/A | ODT75 (7) | FULL (9)
DM | Class I R50 CAL | N/A | ODT75 (7) | N/A
Address and command | Class I MAX | N/A | 56-ohm parallel to VTT discrete | N/A
Clock | DIFF Class I R50 NO CAL | N/A | N/A (termination on DIMM) | N/A

MAX 10, DDR2 component:
DQ/DQS | Class I 12 mA | 50-ohm parallel to VTT discrete | ODT75 (7) | HALF (8)
DM | Class I 12 mA | N/A | N/A | N/A
Address and command | Class I MAX | N/A | 80-ohm parallel to VTT discrete | N/A
Clock | Class I 12 mA | N/A | x1 = 100-ohm differential (10); x2 = 200-ohm differential (11) | N/A

Notes to Table:
1. N/A is not available.
2. R is series resistor.
3. P is parallel resistor.
4. DYN is dynamic OCT.
5. NO CAL is OCT without calibration.
6. CAL is OCT with calibration.
7. ODT75 vs. ODT50 on the memory has the effect of opening the eye more, with a limited increase in overshoot/undershoot.
8. HALF is reduced drive strength.
9. FULL is full drive strength.
10. x1 is a single-device load.
11. x2 is a two-device load. For example, you can feed two out of nine devices on a single-rank DIMM with a single clock pair (except for MAX 10, which does not support DIMMs).
12. DQS SE is single-ended DQS.
13. DQS DIFF is differential DQS.
2.2.2 DDR2 Design Layout Guidelines
The general layout guidelines in the following topic apply to DDR2 SDRAM interfaces.
These guidelines will help you plan your board layout, but are not meant as strict rules
that must be adhered to. Intel recommends that you perform your own board-level
simulations to ensure that the layout you choose for your board allows you to achieve
your desired performance.
For more information about how the memory manufacturers route these address and
control signals on their DIMMs, refer to the Cadence PCB browser from the Cadence
website, at www.cadence.com. The various JEDEC example DIMM layouts are available
from the JEDEC website, at www.jedec.org.
For more information about board skew parameters, refer to Board Skews in the
Implementing and Parameterizing Memory IP chapter. For assistance in calculating
board skew parameters, refer to the board skew calculator tool, which is available at
the Intel website.
Note:
1. The following layout guidelines include several +/- length-based rules. These length-based guidelines are for first-order timing approximations if you cannot simulate the actual delay characteristics of the interface. They do not include any margin for crosstalk.
2. To ensure reliable timing closure to and from the periphery of the device, signals to and from the periphery should be registered before any further logic is connected.
Intel recommends that you get accurate time base skew numbers for your design
when you simulate the specific implementation.
Related Links
http://www.jedec.org/download/DesignFiles/DDR2/default1.cfm
2.2.3 General Layout Guidelines
The following table lists general board design layout guidelines. These guidelines are
Intel recommendations, and should not be considered as hard requirements. You
should perform signal integrity simulation on all the traces to verify the signal integrity
of the interface. You should extract the slew rate and propagation delay information,
enter it into the IP and compile the design to ensure that timing requirements are
met.
Table 28. General Layout Guidelines

Impedance:
• All unused via pads must be removed, because they cause unwanted capacitance.
• Trace impedance plays an important role in the signal integrity. You must perform board-level simulation to determine the best characteristic impedance for your PCB. For example, it is possible that for multi-rank systems 40 ohms could yield better results than a traditional 50-ohm characteristic impedance.

Decoupling Parameter:
• Use 0.1 uF in 0402 size to minimize inductance.
• Make VTT voltage decoupling close to termination resistors.
• Connect decoupling caps between VTT and ground.
• Use a 0.1 uF cap for every other VTT pin and a 0.01 uF cap for every VDD and VDDQ pin.
• Verify the capacitive decoupling using the Intel Power Distribution Network Design Tool.

Power:
• Route GND and VCC as planes.
• Route VCCIO for memories in a single split plane with at least a 20-mil (0.020 inches, or 0.508 mm) gap of separation.
• Route VTT as islands or 250-mil (6.35-mm) power traces.
• Route oscillators and PLL power as islands or 100-mil (2.54-mm) power traces.

General Routing:
All specified delay matching requirements include PCB trace delays, different layer propagation velocity variance, and crosstalk. To minimize PCB layer propagation variance, Intel recommends that signals from the same net group always be routed on the same layer.
• Use 45° angles (not 90° corners).
• Avoid T-junctions for critical nets or clocks.
• Avoid T-junctions greater than 250 mils (6.35 mm).
• Disallow signals across split planes.
• Restrict routing other signals close to system reset signals.
• Avoid routing memory signals closer than 0.025 inch (0.635 mm) to PCI or system clocks.
Related Links
Power Distribution Network Design Tool
2.2.4 Layout Guidelines for DDR2 SDRAM Interface
Unless otherwise specified, the following guidelines apply to the following topologies:
• DIMM—UDIMM topology
• DIMM—RDIMM topology
• Discrete components laid out in UDIMM topology
• Discrete components laid out in RDIMM topology
Trace lengths for CLK and DQS should be tightly matched for each memory component. To match the trace lengths on the board, a balanced tree topology is recommended for clock and address and command signal routing. In addition to matching the trace lengths, you should ensure that DDR timing is passing in the Report DDR Timing report. For Stratix devices, this timing is shown as Write Leveling tDQSS timing. For Arria and Cyclone devices, this timing is shown as CK vs DQS timing.
For a table of device family topology support, refer to Leveling and Dynamic ODT.
The following table lists DDR2 SDRAM layout guidelines. These guidelines are Intel
recommendations, and should not be considered as hard requirements. You should
perform signal integrity simulation on all the traces to verify the signal integrity of the
interface. You should extract the slew rate and propagation delay information, enter it
into the IP and compile the design to ensure that timing requirements are met.
Note:
The following layout guidelines also apply to DDR3 SDRAM without leveling interfaces.
Table 29. DDR2 SDRAM Layout Guidelines (1)

DIMMs:
If you consider a normal DDR2 unbuffered, unregistered DIMM, essentially you are planning to perform the DIMM routing directly on your PCB. Therefore, each address and control pin route from the FPGA (single pin) to all memory devices must be on the same side of the FPGA.

General Routing:
• All data, address, and command signals must have matched length traces ± 50 ps.
• All signals within a given Byte Lane Group should be matched length with maximum deviation of ±10 ps and routed in the same layer.

Clock Routing:
• A 4.7 K-ohm resistor to ground is recommended for each Clock Enable signal. You can place the resistor at either the memory end or the FPGA end of the trace.
• Route clocks on inner layers with outer-layer run lengths held to under 500 mils (12.7 mm).
• These signals should maintain a 10-mil (0.254 mm) spacing from other nets.
• Clocks should maintain a length-matching between clock pairs of ±5 ps.
• Differential clocks should maintain a length-matching between P and N signals of ±2 ps, routed in parallel.
• Space between different pairs should be at least three times the space between the differential pairs and must be routed differentially (5-mil trace, 10-15 mil space on centers), and equal to the signals in the Address/Command Group or up to 100 mils (2.54 mm) longer than the signals in the Address/Command Group.

Address and Command Routing:
• Trace lengths for CLK and DQS should closely match for each memory component. To match trace lengths on the board, a balanced tree topology is recommended for clock and address and command signal routing. For Stratix device families, ensure that Write Leveling tDQSS is passing in the DDR timing report; for Arria and Cyclone device families, verify that CK vs DQS timing is passing in the DDR timing report.
• Unbuffered address and command lines are more susceptible to cross-talk and are generally noisier than buffered address or command lines. Therefore, unbuffered address and command signals should be routed on a different layer than data signals (DQ) and data mask signals (DM) and with greater spacing.
• Do not route differential clock (CK) and clock enable (CKE) signals close to address signals.

DQ, DM, and DQS Routing Rules:
• Keep the distance from the pin on the DDR2 DIMM or component to the termination resistor pack (VTT) to less than 500 mils for DQS[x] Data Groups.
• Keep the distance from the pin on the DDR2 DIMM or component to the termination resistor pack (VTT) to less than 1000 mils for the ADR_CMD_CTL Address Group.
• Parallelism rules for the DQS[x] Data Groups are as follows:
  — 4 mils for parallel runs < 0.1 inch (approximately 1× spacing relative to plane distance)
  — 5 mils for parallel runs < 0.5 inch (approximately 1× spacing relative to plane distance)
  — 10 mils for parallel runs between 0.5 and 1.0 inches (approximately 2× spacing relative to plane distance)
  — 15 mils for parallel runs between 1.0 and 6.0 inches (approximately 3× spacing relative to plane distance)
• Parallelism rules for the ADR_CMD_CTL group and CLOCKS group are as follows:
  — 4 mils for parallel runs < 0.1 inch (approximately 1× spacing relative to plane distance)
  — 10 mils for parallel runs < 0.5 inch (approximately 2× spacing relative to plane distance)
  — 15 mils for parallel runs between 0.5 and 1.0 inches (approximately 3× spacing relative to plane distance)
  — 20 mils for parallel runs between 1.0 and 6.0 inches (approximately 4× spacing relative to plane distance)
• All signals are to maintain a 20-mil separation from other, non-related nets.
• All signals must have a total length of < 6 inches.
• Trace lengths for CLK and DQS should closely match for each memory component. To match trace lengths on the board, a balanced tree topology is recommended for clock and address and command signal routing. For Stratix device families, ensure that Write Leveling tDQSS is passing in the DDR timing report; for Arria and Cyclone device families, verify that CK vs DQS timing is passing in the DDR timing report.

Termination Rules:
• When pull-up resistors are used, fly-by termination configuration is recommended. Fly-by helps reduce stub reflection issues.
• Pull-ups should be within 0.5 to no more than 1 inch.
• Pull-up is typically 56 ohms.
• If using resistor networks:
  — Do not share R-pack series resistors between address/command and data lines (DQ, DQS, and DM) to eliminate crosstalk within the pack.
  — Series and pull-up tolerances are 1–2%.
  — Series resistors are typically 10 to 20 ohms.
  — Address and control series resistor is typically at the FPGA end of the link.
  — DM, DQS, DQ series resistor is typically at the memory end of the link (or just before the first DIMM).
• If termination resistor packs are used:
  — The distance to your memory device should be less than 750 mils.
  — The distance from your FPGA device should be less than 1250 mils.

Quartus Prime Software Settings for Board Layout:
• To perform timing analyses on board and I/O buffers, use a third-party simulation tool to simulate all timing information such as skew, ISI, and crosstalk, and type the simulation results into the UniPHY board settings panel.
• Do not use the advanced I/O timing model (AIOT) or board trace model unless you do not have access to any third-party tool. AIOT provides reasonable accuracy, but tools like HyperLynx provide better results. In operation at higher frequencies, it is crucial to properly simulate all signal-integrity-related uncertainties.
• The Quartus Prime software does a timing check to find how fast the controller issues a write command after a read command, which limits the maximum length of the DQ/DQS trace. Check the turnaround timing in the Report DDR timing report and ensure the margin is positive before board fabrication. Functional failure occurs if the margin is less than 0.

Note to Table:
1. For point-to-point and DIMM interface designs, refer to the Micron website, www.micron.com.
Figure 20. Balanced Tree Topology
The figure shows the FPGA driving CK through a balanced tree to each memory component, where CKi is the clock signal propagation delay to device i and DQSi is the DQ/DQS signals propagation delay to group i.
Related Links
• External Memory Interface Spec Estimator
• www.micron.com
• Leveling and Dynamic Termination on page 84
  DDR3 and DDR4 SDRAM DIMMs, as specified by JEDEC, always use a fly-by topology for the address, command, and clock signals.
2.3 DDR3 Terminations in Arria V, Cyclone V, Stratix III, Stratix IV,
and Stratix V
DDR3 DIMMs have terminations on all unidirectional signals, such as memory clocks,
and addresses and commands; thus eliminating the need for them on the FPGA PCB.
In addition, using the ODT feature on the DDR3 SDRAM and the dynamic OCT feature
of Stratix III, Stratix IV, and Stratix V FPGAs completely eliminates any external
termination resistors; thus simplifying the layout for the DDR3 SDRAM interface when
compared to that of the DDR2 SDRAM interface.
The following topics describe the correct way to terminate a DDR3 SDRAM interface
together with Stratix III, Stratix IV, and Stratix V FPGA devices.
Note:
If you are using a DDR3 SDRAM without leveling interface, refer to “Board Termination
for DDR2 SDRAM”. Note also that Arria V and Cyclone V devices do not support DDR3
with leveling.
Related Links
Termination for DDR2 SDRAM on page 89
DDR2 adheres to the JEDEC standard of governing Stub-Series Terminated Logic
(SSTL), JESD8-15a, which includes four different termination schemes.
2.3.1 Terminations for Single-Rank DDR3 SDRAM Unbuffered DIMM
The most common implementation of the DDR3 SDRAM interface is the unbuffered
DIMM (UDIMM). You can find DDR3 SDRAM UDIMMs in many applications, especially in
PC applications.
The following table lists the recommended termination and drive strength setting for
UDIMM and Stratix III, Stratix IV, and Stratix V FPGA devices.
Note: These settings are recommendations to get you started. Simulate with your real board and try different settings to obtain the best signal integrity.
Table 30. Drive Strength and ODT Setting Recommendations for Single-Rank UDIMM
Signal Type | SSTL 15 I/O Standard (1) | FPGA End On-Board Termination (2) | Memory End Termination for Write | Memory Driver Strength for Read
DQ | Class I R50C/G50C (3) | — | 60-ohm ODT (4) | 40-ohm (4)
DQS | Differential Class I R50C/G50C (3) | — | 60-ohm ODT (4) | 40-ohm (4)
DM | Class I R50C | — | 60-ohm ODT (4) | 40-ohm (4)
Address and Command | Class I with maximum drive strength | — | 39-ohm on-board termination to VDD
CK/CK# | Differential Class I R50C (3) | — | On-board (5): 2.2 pF compensation cap before the first component; 36-ohm termination to VDD for each arm (72-ohm differential); add 0.1 uF just before VDD.
Notes to Table:
1. UniPHY IP automatically implements these settings.
2. Intel recommends that you use dynamic on-chip termination (OCT) for Stratix III and Stratix IV device families.
3. R50C is series with calibration for write, G50C is parallel 50 with calibration for read.
4. You can specify these settings in the parameter editor.
5. For DIMMs, these settings are already implemented on the DIMM card; for component topology, Intel recommends that you mimic the termination scheme of the DIMM card on your board.
You can implement a DDR3 SDRAM UDIMM interface in several permutations, such as
single DIMM or multiple DIMMs, using either single-ranked or dual-ranked UDIMMs. In
addition to the UDIMM’s form factor, these termination recommendations are also valid
for small-outline (SO) DIMMs and MicroDIMMs.
2.3.2 Terminations for Multi-Rank DDR3 SDRAM Unbuffered DIMM
You can implement a DDR3 SDRAM UDIMM interface in several permutations, such as
single DIMM or multiple DIMMs, using either single-ranked or dual-ranked UDIMMs. In
addition to the UDIMM’s form factor, these termination recommendations are also valid
for small-outline (SO) DIMMs and MicroDIMMs.
The following table lists the different permutations of a two-slot DDR3 SDRAM
interface and the recommended ODT settings on both the memory and controller
when writing to memory.
Table 31. DDR3 SDRAM ODT Matrix for Writes (1) (2)
Slot 1 | Slot 2 | Write To | Controller OCT (3) | Slot 1 Rank 1 | Slot 1 Rank 2 | Slot 2 Rank 1 | Slot 2 Rank 2
DR | DR | Slot 1 | Series 50-ohm | 120-ohm (4) | ODT off | 40-ohm (4) | ODT off
DR | DR | Slot 2 | Series 50-ohm | 40-ohm (4) | ODT off | 120-ohm (4) | ODT off
SR | SR | Slot 1 | Series 50-ohm | 120-ohm (4) | Unpopulated | 40-ohm (4) | Unpopulated
SR | SR | Slot 2 | Series 50-ohm | 40-ohm (4) | Unpopulated | 120-ohm (4) | Unpopulated
DR | Empty | Slot 1 | Series 50-ohm | 120-ohm (4) | ODT off | Unpopulated | Unpopulated
Empty | DR | Slot 2 | Series 50-ohm | Unpopulated | Unpopulated | 120-ohm (4) | ODT off
SR | Empty | Slot 1 | Series 50-ohm | 120-ohm (4) | Unpopulated | Unpopulated | Unpopulated
Empty | SR | Slot 2 | Series 50-ohm | Unpopulated | Unpopulated | 120-ohm (4) | Unpopulated
Notes to Table:
1. SR: single-ranked DIMM; DR: dual-ranked DIMM.
2. These recommendations are taken from the DDR3 ODT and Dynamic ODT session of the JEDEC DDR3 2007 Conference, Oct 3-4, San Jose, CA.
3. The controller in this case is the FPGA.
4. Dynamic ODT is required. For example, the ODT of Slot 2 is set to the lower ODT value of 40-ohms when the memory controller is writing to Slot 1, resulting in termination and thus minimizing any reflection from Slot 2. Without dynamic ODT, Slot 2 will not be terminated.
The following table lists the different permutations of a two-slot DDR3 SDRAM
interface and the recommended ODT settings on both the memory and controller
when reading from memory.
Table 32. DDR3 SDRAM ODT Matrix for Reads (1) (2)
Slot 1 | Slot 2 | Read From | Controller OCT (3) | Slot 1 Rank 1 | Slot 1 Rank 2 | Slot 2 Rank 1 | Slot 2 Rank 2
DR | DR | Slot 1 | Parallel 50-ohm | ODT off | ODT off | 40-ohm (4) | ODT off
DR | DR | Slot 2 | Parallel 50-ohm | 40-ohm (4) | ODT off | ODT off | ODT off
SR | SR | Slot 1 | Parallel 50-ohm | ODT off | Unpopulated | 40-ohm (4) | Unpopulated
SR | SR | Slot 2 | Parallel 50-ohm | 40-ohm (4) | Unpopulated | ODT off | Unpopulated
DR | Empty | Slot 1 | Parallel 50-ohm | ODT off | ODT off | Unpopulated | Unpopulated
Empty | DR | Slot 2 | Parallel 50-ohm | Unpopulated | Unpopulated | ODT off | ODT off
SR | Empty | Slot 1 | Parallel 50-ohm | ODT off | Unpopulated | Unpopulated | Unpopulated
Empty | SR | Slot 2 | Parallel 50-ohm | Unpopulated | Unpopulated | ODT off | Unpopulated
Notes to Table:
1. SR: single-ranked DIMM; DR: dual-ranked DIMM.
2. These recommendations are taken from the DDR3 ODT and Dynamic ODT session of the JEDEC DDR3 2007 Conference, Oct 3-4, San Jose, CA.
3. The controller in this case is the FPGA. JEDEC typically recommends 60-ohms, but this value assumes that the typical motherboard trace impedance is 60-ohms and that the controller supports this termination. Intel recommends using a 50-ohm parallel OCT when reading from the memory.
2.3.3 Terminations for DDR3 SDRAM Registered DIMM
The difference between a registered DIMM (RDIMM) and a UDIMM is that the clock,
address, and command pins of the RDIMM are registered or buffered on the DIMM
before they are distributed to the memory devices. For a controller, each clock,
address, or command signal has only one load, which is the register or buffer. In a
UDIMM, each controller pin must drive a fly-by wire with multiple loads.
You do not need to terminate the clock, address, and command signals on your board
because these signals are terminated at the register. However, because of the register,
these signals become point-to-point signals and have improved signal integrity making
the drive strength requirements of the FPGA driver pins more relaxed. Similar to the
signals in a UDIMM, the DQS, DQ, and DM signals on a RDIMM are not registered. To
terminate these signals, refer to “DQS, DQ, and DM for DDR3 SDRAM UDIMM”.
2.3.4 Terminations for DDR3 SDRAM Load-Reduced DIMM
RDIMM and LRDIMM differ in that DQ, DQS, and DM signals are registered or buffered
in the LRDIMM. The LRDIMM buffer IC is a superset of the RDIMM buffer IC. The buffer
IC isolates the memory interface signals from loading effects of the memory chip.
Reduced electrical loading allows a system to operate at higher frequency and higher
density.
Note: If you want to use your DIMM socket for UDIMM and RDIMM/LRDIMM, you must create the necessary redundant connections on the board from the FPGA to the DIMM socket. For example, the number of chip select signals required for a single-rank UDIMM is one, but for a single-rank RDIMM the number of chip selects required is two. RDIMM and LRDIMM have parity signals associated with the address and command bus, which UDIMM does not have. Consult the DIMM manufacturer's data sheet for detailed information about the necessary pin connections for various DIMM topologies.
2.3.5 Terminations for DDR3 SDRAM Components With Leveling
The following topics discusses terminations used to achieve optimum performance for
designing the DDR3 SDRAM interface using discrete DDR3 SDRAM components.
In addition to using DDR3 SDRAM DIMM to implement your DDR3 SDRAM interface,
you can also use DDR3 SDRAM components. However, for applications that have
limited board real estate, using DDR3 SDRAM components reduces the need for a
DIMM connector and places components closer, resulting in denser layouts.
2.3.5.1 DDR3 SDRAM Components With or Without Leveling
The DDR3 SDRAM UDIMM is laid out to the JEDEC specification. The JEDEC
specification is available from either the JEDEC Organization website (www.JEDEC.org)
or from the memory vendors. However, when you are designing the DDR3 SDRAM
interface using discrete SDRAM components, you may desire a layout scheme that is
different than the DIMM specification.
You have the following options:
• Mimic the standard DDR3 SDRAM DIMM, using a fly-by topology for the memory clocks, address, and command signals. This option needs read and write leveling, so you must use the UniPHY IP with leveling.
• Mimic a standard DDR2 SDRAM DIMM, using a balanced (symmetrical) tree-type topology for the memory clocks, address, and command signals. Using this topology results in unwanted stubs on the command, address, and clock, which degrades signal integrity and limits the performance of the DDR3 SDRAM interface.
Related Links
• Layout Guidelines for DDR3 and DDR4 SDRAM Interfaces on page 115
  The following table lists DDR3 and DDR4 SDRAM layout guidelines.
• www.JEDEC.org
2.4 DDR3 and DDR4 on Arria 10 and Stratix 10 Devices
The following topics describe considerations specific to DDR3 and DDR4 external
memory interface protocols on Arria 10 and Stratix 10 devices.
Related Links
www.JEDEC.org
2.4.1 Dynamic On-Chip Termination (OCT) in Arria 10 and Stratix 10
Devices
Depending upon the Rs (series) and Rt (parallel) OCT values that you want, you
should choose appropriate values for the RZQ resistor and connect this resistor to the
RZQ pin of the FPGA.
• Select a 240-ohm reference resistor to ground to implement Rs OCT values of 34-ohm, 40-ohm, 48-ohm, 60-ohm, and 80-ohm, and Rt OCT resistance values of 20-ohm, 30-ohm, 34-ohm, 40-ohm, 60-ohm, 80-ohm, 120-ohm, and 240-ohm.
• Select a 100-ohm reference resistor to ground to implement Rs OCT values of 25-ohm and 50-ohm, and an Rt OCT resistance of 50-ohm.
Check the FPGA I/O tab of the parameter editor to determine the I/O standards and
termination values supported for data, address and command, and memory clock
signals.
2.4.2 Dynamic On-Die Termination (ODT) in DDR4
In DDR4, in addition to the Rtt_nom and Rtt_wr values, which are applied during read
and write respectively, a third option called Rtt_park is available. When Rtt_park is
enabled, a selected termination value is set in the DRAM when ODT is driven low.
Rtt_nom and Rtt_wr work the same as in DDR3, which is described in Dynamic ODT
for DDR3.
Refer to the DDR4 JEDEC specification or your memory vendor data sheet for details
about available termination values and functional description for dynamic ODT in
DDR4 devices.
For DDR4 LRDIMM, if SPD byte 152 calls for different values of Rtt_Park to be used
for package ranks 0 and 1 versus package ranks 2 and 3, set the value to the larger of
the two impedance settings.
2.4.3 Choosing Terminations on Arria 10 Devices
To determine optimal on-chip termination (OCT) and on-die termination (ODT) values
for best signal integrity, you should simulate your memory interface in HyperLynx or a
similar tool.
If the optimal OCT and ODT termination values as determined by simulation are not
available in the list of available values in the parameter editor, select the closest
available termination values for OCT and ODT.
Refer to Dynamic On-Chip Termination (OCT) in Arria 10 Devices for examples of
various OCT modes. Refer to the Arria 10 Device Handbook for more information
about OCT. For information on available ODT choices, refer to your memory vendor
data sheet.
Related Links
Dynamic On-Chip Termination (OCT) in Arria 10 and Stratix 10 Devices on page 89
Depending upon the Rs (series) and Rt (parallel) OCT values that you want, you
should choose appropriate values for the RZQ resistor and connect this resistor to
the RZQ pin of the FPGA.
2.4.4 On-Chip Termination Recommendations for DDR3 and DDR4 on Arria
10 Devices
• Output mode (drive strength) for Address/Command/Clock and Data Signals: Depending upon the I/O standard that you have selected, you have a range of selections expressed in terms of ohms or milliamps. A value of 34 to 40 ohms or 12 mA is a good starting point for output mode drive strength.
• Input mode (parallel termination) for Data and Data Strobe signals: A value of 40 or 60 ohms is a good starting point for FPGA-side input termination.
2.5 Layout Approach
For all practical purposes, you can regard the TimeQuest timing analyzer's report on
your memory interface as definitive for a given set of memory and board timing
parameters.
You will find timing under Report DDR in TimeQuest and on the Timing Analysis tab
in the parameter editor.
The following flowchart illustrates the recommended process to follow during the
board design phase, to determine timing margin and make iterative improvements to
your design.
The flow is as follows: complete the primary layout; calculate setup and hold derating, channel signal integrity, and board skews, and find the memory timing parameters; generate an IP core that accurately represents your memory subsystem, including pin-out and accurate parameters in the parameter editor's Board Settings tab; then run Quartus Prime compilation with the generated IP core. If the Report DDR panel shows any non-core timing violations, adjust the layout to improve trace length mismatch, signal reflections (ISI), crosstalk, or memory speed grade, and iterate; if there are no violations, you are done.
Board Skew
For information on calculating board skew parameters, refer to Implementing and Parameterizing Memory IP, in the External Memory Interface Handbook.
The Board Skew Parameter Tool is an interactive tool that can help you calculate board
skew parameters if you know the absolute delay values for all the memory related
traces.
Memory Timing Parameters
For information on the memory timing parameters to be entered into the parameter
editor, refer to the datasheet for your external memory device.
Related Links
Board Skew Parameter Tool
2.6 Channel Signal Integrity Measurement
As external memory interface data rates increase, so does the importance of proper channel signal integrity measurement. By measuring the actual channel loss during the layout process and including that data in your parameterization, you achieve a realistic assessment of timing margins.
2.6.1 Importance of Accurate Channel Signal Integrity Information
Default values for channel loss (or eye reduction) can be used when calculating timing margins; however, those default values may not accurately reflect the channel loss in your system. If the channel loss in your system is different from the default values, the calculated timing margins will vary accordingly.
If your actual channel loss is greater than the default channel loss, and you rely on the default values, the available timing margins for the entire system will be lower than the values calculated during compilation. By relying on default values that do not accurately reflect your system, you may be led to believe that you have good timing margin, while in reality your design may require changes to achieve good channel signal integrity.
2.6.2 Understanding Channel Signal Integrity Measurement
To measure channel signal integrity you need to measure the channel loss for various signals. For a particular signal or signal trace, channel loss is defined as the loss of eye width at +/- VIH (AC and DC) and +/- VIL (AC and DC). VIH/VIL above or below VREF is used to align with various requirements of the timing model for memory interfaces.
The example below shows a reference eye diagram where the channel loss on the setup (leading) side of the eye is equal to the channel loss on the hold (lagging) side of the eye; however, it does not necessarily have to be that way. Because Intel's calibrating PHY will calibrate to the center of the read and write eye, the Board Settings tab has parameters for the total extra channel loss for Write DQ and Read DQ. For address and command signals, which are not calibrated, the Board Settings tab allows you to enter setup- and hold-side channel losses that are not equal, allowing the Quartus Prime software to place the clock statically within the center of the address and command eye.
Figure 21. Equal Setup and Hold-Side Losses
2.6.3 How to Enter Calculated Channel Signal Integrity Values
You should enter calculated channel loss values in the Channel Signal Integrity
section of the Board (or Board Timing) tab of the parameter editor.
Arria V, Cyclone V, and Stratix V
For 28nm families, fixed values are assigned to different signals within the timing
analysis algorithms of the Quartus Prime software. The following table shows the
values for different signal groups:
Signal Group | Assumed Channel Loss
Address/Command (output) | 250 ps
Write (output) | 350 ps
Read Capture (input) | 225 ps
If your calculated values are higher than the assumed channel loss, you must enter
the positive difference; if your calculated values are lower than the assumed channel
loss, you must enter the negative difference. For example, if the measured channel
loss for reads for your system is 250 ps then you should enter 25 ps as the read
channel loss.
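The following Python sketch (a hypothetical helper, not an Intel tool) shows the bookkeeping described above for 28nm families: the value entered in the parameter editor is the measured channel loss minus the assumed channel loss built into the timing algorithms.

# Hypothetical helper: compute the channel-loss value to enter for 28nm families.
ASSUMED_LOSS_PS = {
    "address_command": 250,   # output
    "write": 350,             # output
    "read_capture": 225,      # input
}

def channel_loss_entry(signal_group, measured_loss_ps):
    """Positive result: enter the extra loss; negative result: enter the (negative) difference."""
    return measured_loss_ps - ASSUMED_LOSS_PS[signal_group]

# Example from the text: 250 ps measured read loss -> enter 25 ps.
print(channel_loss_entry("read_capture", 250))   # 25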
Arria 10 and Stratix 10
For Arria 10 and Stratix 10 EMIF IP, the default channel loss displayed in the
parameter editor is based on the selected configuration (different values for single
rank versus dual rank), and on internal Intel reference boards. You should replace the
default value with the value that you calculate.
2.6.4 Guidelines for Calculating DDR3 Channel Signal Integrity
Address and Command ISI and Crosstalk
Simulate the address/command and control signals and capture the eye at the DRAM pins, using the memory clock as the trigger for the memory interface's address/command and control signals. Measure the setup and hold channel losses at the voltage thresholds mentioned in the memory vendor's data sheet.
Address and command channel loss = Measured loss on the setup side + measured loss on the hold side.
VREF = VDD/2 = 0.75 V for DDR3
You should select the VIH and VIL voltage levels appropriately for the DDR3L memory device that you are using. Check with your memory vendor for the correct voltage levels, as the levels may vary for different speed grades of device.
The following figure illustrates a DDR3 example where VIH(AC)/VIL(AC) is +/- 150 mV and VIH(DC)/VIL(DC) is +/- 100 mV.
Figure 22.
Write DQ ISI and Crosstalk
Simulate the write DQ signals and capture the eye at the DRAM pins, using DQ Strobe (DQS) as a trigger for the DQ signals of the memory interface simulation. Measure the setup and hold channel losses at the VIH and VIL mentioned in the memory vendor's data sheet. The following figure illustrates a DDR3 example where VIH(AC)/VIL(AC) is +/- 150 mV and VIH(DC)/VIL(DC) is +/- 100 mV.
Write Channel Loss = Measured Loss on the Setup side + Measured Loss on the Hold side
VREF = VDD/2 = 0.75 V for DDR3
Figure 23.
Read DQ ISI and Crosstalk
Simulate the read DQ signals and capture the eye at the FPGA die. Do not measure at the pin, because you might see unwanted reflections that could create a false representation of the eye opening at the input buffer of the FPGA. Use DQ Strobe (DQS) as a trigger for the DQ signals of your memory interface simulation. Measure the eye opening at +/- 70 mV (VIH/VIL) with respect to VREF.
Read Channel Loss = (UI) - (Eye opening at +/- 70 mV with respect to VREF)
UI = Unit interval. For example, if you are running your interface at 800 MHz, the effective data rate is 1600 Mbps, giving a unit interval of 1/1600 = 625 ps.
VREF = VDD/2 = 0.75 V for DDR3
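The following Python sketch (an assumed helper, not an Intel tool) works through the unit-interval and read-channel-loss arithmetic above. The 800 MHz case reproduces the 625 ps unit interval; the eye-opening value in the example is a placeholder you would replace with your simulation result.

# Assumed helper: unit interval and read channel loss for a DDR interface.
def unit_interval_ps(clock_mhz):
    data_rate_mbps = 2 * clock_mhz        # double data rate: 800 MHz -> 1600 Mbps
    return 1e6 / data_rate_mbps           # picoseconds per bit

def read_channel_loss_ps(clock_mhz, eye_opening_ps):
    """Read channel loss = UI - eye opening measured at +/- 70 mV around VREF."""
    return unit_interval_ps(clock_mhz) - eye_opening_ps

print(unit_interval_ps(800))              # 625.0 ps
print(read_channel_loss_ps(800, 400))     # 225.0 ps for a placeholder 400 ps eye opening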
Figure 24.
Write/Read DQS ISI and Crosstalk
Simulate the Write/Read DQS and capture eye, and measure the uncertainty at VREF.
VREF = VDD/2 = 0.75 V for DDR3
Figure 25.
2.6.5 Guidelines for Calculating DDR4 Channel Signal Integrity
Address and Command ISI and Crosstalk
Simulate the address/command and control signals and capture the eye at the DRAM pins, using the memory clock as the trigger for the memory interface's address/command and control signals. Measure the setup and hold channel losses at the voltage thresholds mentioned in the memory vendor's data sheet.
Address and command channel loss = Measured loss on the setup side + measured loss on the hold side.
VREF = VDD/2 = 0.6 V for address/command for DDR4.
You should select the VIH and VIL voltage levels appropriately for the DDR4 memory
device that you are using. Check with your memory vendor for the correct voltage
levels, as the levels may vary for different speed grades of device.
The following figure illustrates a DDR4-1200 example, where VIH(AC)/ VIL(AC) is +/- 100
mV and VIH(DC)/ VIL(DC) is +/- 75 mV.
Select the VIH(AC), VIL(AC), VIH(DC), and VIL(DC) for the speed grade of the DDR4 memory device from the memory vendor's data sheet.
Figure 26.
Write DQ ISI and Crosstalk
Simulate the write DQ signals and capture the eye at the DRAM pins, using DQ Strobe (DQS) as a trigger for the DQ signals of the memory interface simulation. Measure the setup and hold channel losses at the VIH and VIL mentioned in the memory vendor's data sheet.
Write Channel Loss = Measured Loss on the Setup side + Measured Loss on the Hold side
or
Write Channel Loss = UI – (Eye opening at VIH or VIL)
VREF = Voltage level where the eye opening is highest.
VIH = VREF + (0.5 x VdiVW)
VIL = VREF - (0.5 x VdiVW)
where VdiVW varies with the frequency of operation; you can find the VdiVW value in your memory vendor's data sheet.
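The following Python sketch (an assumed helper using placeholder VREF and VdiVW numbers, not values from any data sheet) applies the VIH/VIL relations and the UI-minus-eye-opening form of the write channel loss described above.

# Assumed helper: DDR4 write thresholds and write channel loss.
def write_thresholds(vref_v, vdivw_v):
    """VIH/VIL around the VREF that gives the widest eye, per the relations above."""
    vih = vref_v + 0.5 * vdivw_v
    vil = vref_v - 0.5 * vdivw_v
    return vih, vil

def write_channel_loss_ps(ui_ps, eye_opening_ps):
    """Write channel loss = UI - eye opening measured at VIH/VIL."""
    return ui_ps - eye_opening_ps

# Placeholder numbers only; take VREF, VdiVW, UI, and the eye opening from your own
# simulation and your memory vendor's data sheet.
print(write_thresholds(0.70, 0.13))        # (0.765, 0.635)
print(write_channel_loss_ps(625.0, 450))   # 175.0 ps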
Figure 27.
Read DQ ISI and Crosstalk
Simulate the read DQ signals and capture the eye at the FPGA die. Do not measure at the pin, because you might see unwanted reflections that could create a false representation of the eye opening at the input buffer of the FPGA. Use DQ Strobe (DQS) as a trigger for the DQ signals of your memory interface simulation. Measure the eye opening at +/- 70 mV (VIH/VIL) with respect to VREF.
Read Channel Loss = (UI) - (Eye opening at +/- 70 mV with respect to VREF)
UI = Unit interval. For example, if you are running your interface at 800 MHz, the effective data rate is 1600 Mbps, giving a unit interval of 1/1600 = 625 ps.
VREF = Voltage level where the eye opening is highest.
Figure 28.
Write/Read DQS ISI and Crosstalk
Simulate write and read DQS and capture eye. Measure the uncertainty at VREF.
VREF = Voltage level where the eye opening is the highest.
Figure 29.
2.7 Design Layout Guidelines
The general layout guidelines in the following topic apply to DDR3 and DDR4 SDRAM
interfaces.
These guidelines will help you plan your board layout, but are not meant as strict rules
that must be adhered to. Intel recommends that you perform your own board-level
simulations to ensure that the layout you choose for your board allows you to achieve
your desired performance.
For more information about how the memory manufacturers route these address and
control signals on their DIMMs, refer to the Cadence PCB browser from the Cadence
website, at www.cadence.com. The various JEDEC example DIMM layouts are available
from the JEDEC website, at www.jedec.org.
For more information about board skew parameters, refer to Board Skews in the
Implementing and Parameterizing Memory IP chapter. For assistance in calculating
board skew parameters, refer to the board skew calculator tool, which is available at
the Intel website.
Note:
1. The following layout guidelines include several +/- length-based rules. These length-based guidelines are for first-order timing approximations if you cannot simulate the actual delay characteristics of the interface. They do not include any margin for crosstalk.
2. To ensure reliable timing closure to and from the periphery of the device, signals to and from the periphery should be registered before any further logic is connected.
Intel recommends that you get accurate time base skew numbers for your design
when you simulate the specific implementation.
Related Links
• www.JEDEC.org
• www.cadence.com
• www.mentor.com
• Board Skew Parameters Tool
• http://www.jedec.org/download/DesignFiles/DDR2/default1.cfm
2.7.1 General Layout Guidelines
The following table lists general board design layout guidelines. These guidelines are
Intel recommendations, and should not be considered as hard requirements. You
should perform signal integrity simulation on all the traces to verify the signal integrity
of the interface. You should extract the slew rate and propagation delay information,
enter it into the IP and compile the design to ensure that timing requirements are
met.
Table 33. General Layout Guidelines

Impedance:
• All unused via pads must be removed, because they cause unwanted capacitance.
• Trace impedance plays an important role in the signal integrity. You must perform board-level simulation to determine the best characteristic impedance for your PCB. For example, it is possible that for multi-rank systems 40 ohms could yield better results than a traditional 50-ohm characteristic impedance.

Decoupling Parameter:
• Use 0.1 uF in 0402 size to minimize inductance.
• Make VTT voltage decoupling close to termination resistors.
• Connect decoupling caps between VTT and ground.
• Use a 0.1 uF cap for every other VTT pin and a 0.01 uF cap for every VDD and VDDQ pin.
• Verify the capacitive decoupling using the Intel Power Distribution Network Design Tool.

Power:
• Route GND and VCC as planes.
• Route VCCIO for memories in a single split plane with at least a 20-mil (0.020 inches, or 0.508 mm) gap of separation.
• Route VTT as islands or 250-mil (6.35-mm) power traces.
• Route oscillators and PLL power as islands or 100-mil (2.54-mm) power traces.

General Routing:
All specified delay matching requirements include PCB trace delays, different layer propagation velocity variance, and crosstalk. To minimize PCB layer propagation variance, Intel recommends that signals from the same net group always be routed on the same layer.
• Use 45° angles (not 90° corners).
• Avoid T-junctions for critical nets or clocks.
• Avoid T-junctions greater than 250 mils (6.35 mm).
• Disallow signals across split planes.
• Restrict routing other signals close to system reset signals.
• Avoid routing memory signals closer than 0.025 inch (0.635 mm) to PCI or system clocks.
Related Links
Power Distribution Network Design Tool
2.7.2 Layout Guidelines for DDR3 and DDR4 SDRAM Interfaces
The following table lists DDR3 and DDR4 SDRAM layout guidelines.
Unless otherwise specified, the guidelines in the following table apply to the following
topologies:
• DIMM—UDIMM topology
• DIMM—RDIMM topology
• DIMM—LRDIMM topology (Note: Not all versions of the Quartus Prime software support LRDIMM.)
• Discrete components laid out in UDIMM topology
• Discrete components laid out in RDIMM topology
These guidelines are recommendations, and should not be considered as hard
requirements. You should perform signal integrity simulation on all the traces to verify
the signal integrity of the interface.
Unless stated otherwise, the following guidelines apply to all devices that support
DDR3 or DDR4, including Arria 10 and Stratix 10.
For information on the simulation flow for 28nm products, refer to http://www.alterawiki.com/wiki/Measuring_Channel_Signal_Integrity.
For information on the simulation flow for Arria 10 products, refer to http://www.alterawiki.com/wiki/Arria_10_EMIF_Simulation_Guidance.
For supported frequencies and topologies, refer to the External Memory Interface Spec Estimator at http://www.altera.com/technology/memory/estimator/mem-emif-index.html.
For frequencies greater than 800 MHz, when you are calculating the delay associated
with a trace, you must take the FPGA package delays into consideration. For more
information, refer to Package Deskew.
For device families that do not support write leveling, refer to Layout Guidelines for
DDR2 SDRAM Interfaces.
Table 34.
DDR3 and DDR4 SDRAM Layout Guidelines
Parameter
Decoupling Parameter
Guidelines
•
•
•
Maximum Trace Length
(2)
(1)
•
•
•
•
Make VTT voltage decoupling close to the components and pull-up resistors.
Connect decoupling caps between VTT and VDD using a 0.1F cap for every
other VTT pin.
Use a 0.1 uF cap and 0.01 uF cap for every VDDQ pin.
Even though there are no hard requirements for minimum trace length, you
need to simulate the trace to ensure the signal integrity. Shorter routes result
in better timing.
For DIMM topology only:
Maximum trace length for all signals from FPGA to the first DIMM slot is 4.5
inches.
Maximum trace length for all signals from DIMM slot to DIMM slot is 0.425
inches.
continued...
External Memory Interface Handbook Volume 2: Design Guidelines
115
2 DDR2, DDR3, and DDR4 SDRAM Board Design Guidelines
Parameter
Guidelines
•
•
•
For discrete components only:
Maximum trace length for address, command, control, and clock from FPGA to
the first component must not be more than 7 inches.
Maximum trace length for DQ, DQS, DQS#, and DM from FPGA to the first
component is 5 inches.
General Routing
•
•
Route over appropriate VCC and GND planes.
Keep signal routing layers close to GND and power planes.
Spacing Guidelines
•
Avoid routing two signal layers next to each other. Always make sure that the
signals related to memory interface are routed between appropriate GND or
power layers.
For DQ/DQS/DM traces: Maintain at least 3H spacing between the edges (airgap) for these traces. (Where H is the vertical distance to the closest return
path for that particular trace.)
For Address/Command/Control traces: Maintain at least 3H spacing between
the edges (air-gap) these traces. (Where H is the vertical distance to the
closest return path for that particular trace.)
For Clock traces: Maintain at least 5H spacing between two clock pair or a
clock pair and any other memory interface trace. (Where H is the vertical
distance to the closest return path for that particular trace.)
•
•
•
Clock Routing
•
•
•
•
•
•
•
•
Address and Command Routing
•
•
•
•
Route clocks on inner layers with outer-layer run lengths held to under 500
mils (12.7 mm).
Route clock signals in a daisy chain topology from the first SDRAM to the last
SDRAM. The maximum length of the first SDRAM to the last SDRAM must not
exceed 0.69 tCK for DDR3 and 1.5 tCK for DDR4. For different DIMM
configurations, check the appropriate JEDEC specification.
These signals should maintain the following spacings:
Clocks should maintain a length-matching between clock pairs of ±5 ps.
Clocks should maintain a length-matching between positive (p) and negative
(n) signals of ±2 ps, routed in parallel.
Space between different pairs should be at least two times the trace width of
the differential pair to minimize loss and maximize interconnect density.
To avoid mismatched transmission line to via, Intel recommends that you use
Ground Signal Signal Ground (GSSG) topology for your clock pattern—GND|
CLKP|CKLN|GND.
Route all addresses and commands to match the clock signals to within ±20 ps
to each discrete memory component. Refer to the following figure.
Route address and command signals in a daisy chain topology from the first
SDRAM to the last SDRAM. The maximum length of the first SDRAM to the last
SDRAM must not be more than 0.69 tCK for DDR3 and 1.5 tCK for DDR4. For
different DIMM configurations, check the appropriate JEDEC specifications.
UDIMMs are more susceptible to cross-talk and are generally noisier than
buffered DIMMs. Therefore, route address and command signals of UDIMMs on
a different layer than data signals (DQ) and data mask signals (DM) and with
greater spacing.
Do not route differential clock (CK) and clock enable (CKE) signals close to
address signals.
Route all addresses and commands to match the clock signals to within ±20 ps
to each discrete memory component. Refer to the following figure.
continued...
External Memory Interface Handbook Volume 2: Design Guidelines
116
2 DDR2, DDR3, and DDR4 SDRAM Board Design Guidelines
Parameter: DQ, DM, and DQS Routing Rules
Guidelines:
• All the trace length matching requirements are from the FPGA package ball to
  the SDRAM package ball, which means you must consider trace mismatching on
  different DIMM raw cards.
• Match in length all DQ, DQS, and DM signals within a given byte-lane group
  with a maximum deviation of ±10 ps.
• Route all DQ, DQS, and DM signals within a given byte-lane group on the same
  layer, to avoid layer-to-layer transmission velocity differences, which
  otherwise increase the skew within the group.
• Do not count on FPGAs to deskew more than 20 ps of DQ group skew. The deskew
  algorithm only removes the following possible uncertainties:
  — Minimum and maximum die IOE skew or delay mismatch
  — Minimum and maximum device package skew or mismatch
  — Board delay mismatch of 20 ps
  — Memory component DQ skew mismatch
  Increasing any of these four parameters runs the risk of the deskew
  algorithm limiting, failing to correct for the total observed system skew.
  If the algorithm cannot compensate without limiting the correction, timing
  analysis shows reduced margins.
• For memory interfaces with leveling, the timing between the DQS and clock
  signals on each device calibrates dynamically to meet tDQSS. To make sure
  the skew is not too large for the leveling circuit's capability, follow
  these rules:
  — Propagation delay of the clock signal must not be shorter than the
    propagation delay of the DQS signal at every device: CKi – DQSi > 0;
    0 < i < number of components – 1. For DIMMs, ensure that the CK trace is
    longer than the longest DQS trace at the DIMM connector.
  — Total skew of the CLK and DQS signals between groups is less than one
    clock cycle: (CKi + DQSi) max – (CKi + DQSi) min < 1 × tCK. (If you are
    using a DIMM topology, your delay and skew must take into consideration
    values for the actual DIMM.)

Parameter: Spacing Guidelines
Guidelines:
• Avoid routing two signal layers next to each other. Always ensure that the
  signals related to the memory interface are routed between appropriate GND
  or power layers.
• For DQ/DQS/DM traces: Maintain at least 3H spacing between the edges
  (air-gap) of these traces, where H is the vertical distance to the closest
  return path for that particular trace.
• For Address/Command/Control traces: Maintain at least 3H spacing between the
  edges (air-gap) of these traces, where H is the vertical distance to the
  closest return path for that particular trace.
• For Clock traces: Maintain at least 5H spacing between two clock pairs, or
  between a clock pair and any other memory interface trace, where H is the
  vertical distance to the closest return path for that particular trace.

Parameter: Quartus Prime Software Settings for Board Layout
Guidelines:
• To perform timing analyses on the board and I/O buffers, use a third-party
  simulation tool to simulate all timing information such as skew, ISI, and
  crosstalk, and enter the simulation results into the UniPHY board settings
  panel.
• Do not use the advanced I/O timing model (AIOT) or board trace model unless
  you do not have access to any third-party tool. AIOT provides reasonable
  accuracy, but tools such as HyperLynx provide better results.
Notes to Table:
1. For point-to-point and DIMM interface designs, refer to the Micron website, www.micron.com.
2. For better efficiency, the UniPHY IP requires faster turnarounds from read commands to write.
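The ±10 ps byte-lane matching rule in the table above can be checked mechanically once routed delays are extracted from your board tool. The following Python sketch is illustrative only: the net names and delay values are hypothetical, and it uses DQS as the reference net, which is one reasonable interpretation of the rule.

    # Hypothetical routed delays (ps) for one byte lane; not from a real board database.
    lane = {
        "mem_dq[0]": 512.0, "mem_dq[1]": 509.5, "mem_dq[2]": 515.2,
        "mem_dq[3]": 508.8, "mem_dq[4]": 511.1, "mem_dq[5]": 513.9,
        "mem_dq[6]": 510.4, "mem_dq[7]": 507.6,
        "mem_dm[0]": 514.0, "mem_dqs[0]": 511.0,
    }
    TOLERANCE_PS = 10.0                 # +/-10 ps rule from the table above
    reference = lane["mem_dqs[0]"]      # assumed reference: the DQS net of the lane

    for net, delay in sorted(lane.items()):
        deviation = delay - reference
        status = "OK" if abs(deviation) <= TOLERANCE_PS else "VIOLATION"
        print(f"{net:12s} {delay:7.1f} ps  dev {deviation:+6.1f} ps  {status}")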
Related Links
• Layout Guidelines for DDR2 SDRAM Interface on page 96
  Unless otherwise specified, the following guidelines apply to the following
  topologies:
• Package Deskew on page 123
  Trace lengths inside the device package are not uniform for all package
  pins. The nonuniformity of package traces can affect system timing for high
  frequencies.
• External Memory Interface Spec Estimator
• www.micron.com
2.7.3 Length Matching Rules
The following topics provide guidance on length matching for different types of DDR3
signals.
Route all addresses and commands to match the clock signals to within ±20 ps to
each discrete memory component. The following figure shows the DDR3 and DDR4
SDRAM component routing guidelines for address and command signals.
Figure 30. DDR3 and DDR4 SDRAM Component Address and Command Routing Guidelines
The figure shows the clock and the address and command nets routed from the
FPGA in a fly-by chain through four SDRAM components to VTT terminations, with
address and command segment delays x, x1, x2, x3 and clock segment delays y,
y1, y2, y3. Propagation delay is less than 0.69 tCK for DDR3 and less than
1.5 tCK for DDR4.
If using discrete components:
x = y ± 20 ps
x + x1 = y + y1 ± 20 ps
x + x1 + x2 = y + y1 + y2 ± 20 ps
x + x1 + x2 + x3 = y + y1 + y2 + y3 ± 20 ps
If using a DIMM topology:
x = y ± 20 ps
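As a sanity check of the matching relations above, the cumulative delays can be compared at each component. The Python sketch below uses hypothetical segment delays for x, x1, x2, x3 and y, y1, y2, y3; it is not part of any Intel tool flow.

    # Hypothetical per-segment delays (ps) for a four-component fly-by chain.
    addr_cmd = [250.0, 60.0, 60.0, 60.0]   # x, x1, x2, x3
    clock    = [255.0, 58.0, 63.0, 57.0]   # y, y1, y2, y3

    acc_ac = acc_ck = 0.0
    for i, (a, c) in enumerate(zip(addr_cmd, clock)):
        acc_ac += a
        acc_ck += c
        diff = acc_ac - acc_ck            # must stay within +/-20 ps at every component
        status = "OK" if abs(diff) <= 20.0 else "VIOLATION"
        print(f"component {i}: addr/cmd {acc_ac:.1f} ps, clock {acc_ck:.1f} ps, "
              f"diff {diff:+.1f} ps -> {status}")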
The timing between the DQS and clock signals on each device calibrates dynamically
to meet tDQSS. The following figure shows the delay requirements to align DQS and
clock signals. To ensure that the skew is not too large for the leveling circuit’s
capability, follow these rules:
• Propagation delay of the clock signal must not be shorter than the
  propagation delay of the DQS signal at every device:
  CKi – DQSi > 0; 0 < i < number of components – 1
• Total skew of the CLK and DQS signals between groups is less than one clock
  cycle:
  (CKi + DQSi) max – (CKi + DQSi) min < 1 × tCK
Figure 31. Delaying DQS Signal to Align DQS and Clock
The figure shows the FPGA driving the CK net in a fly-by chain through DDR3
components 0 through i, terminated at VTT, while DQ group 0 through DQ group i
are routed point-to-point to the corresponding components.
CKi = clock signal propagation delay to device i
DQSi = DQ/DQS signals propagation delay to group i
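The two leveling rules can also be checked directly against extracted delays. In the sketch below, CKi and DQSi follow the definitions in the figure; all delay values and the 800 MHz clock period are hypothetical.

    # Hypothetical propagation delays (ps) for a three-component DDR3 interface.
    ck  = [300.0, 380.0, 460.0]   # CKi: clock delay to device i
    dqs = [250.0, 260.0, 255.0]   # DQSi: DQ/DQS delay to group i
    tck_ps = 1250.0               # assumed clock period (800 MHz memory clock)

    # Rule 1: CKi - DQSi > 0 at every device.
    for i, (c, d) in enumerate(zip(ck, dqs)):
        assert c - d > 0, f"rule 1 violated at device {i}: CK - DQS = {c - d:.1f} ps"

    # Rule 2: spread of (CKi + DQSi) across groups must be less than one tCK.
    sums = [c + d for c, d in zip(ck, dqs)]
    spread = max(sums) - min(sums)
    assert spread < tck_ps, f"rule 2 violated: spread {spread:.1f} ps"
    print(f"leveling rules satisfied (spread = {spread:.1f} ps, tCK = {tck_ps:.1f} ps)")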
CK pair matching—If you are using a DIMM (UDIMM, RDIMM, or LRDIMM) topology,
match the trace lengths up to the DIMM connector. If you are using discrete
components, match the lengths for all the memory components connected in the
fly-by chain.
DQ group length matching—If you are using a DIMM (UDIMM, RDIMM, or LRDIMM)
topology, apply the DQ group trace matching rules described in the guideline table
earlier up to the DIMM connector. If you are using discrete components, match the
lengths up to the respective memory components.
When you are using DIMMs, it is assumed that lengths are tightly matched within the
DIMM itself. You should check that appropriate traces are length-matched within the
DIMM.
2.7.4 Spacing Guidelines
This topic provides recommendations for minimum spacing between board traces for
various signal traces.
Spacing Guidelines for DQ, DQS, and DM Traces
Maintain a minimum of 3H spacing between the edges (air-gap) of these traces.
(Where H is the vertical distance to the closest return path for that particular trace.)
Spacing Guidelines for Address and Command and Control Traces
Maintain at least 3H spacing between the edges (air-gap) of these traces. (Where H is
the vertical distance to the closest return path for that particular trace.)
Spacing Guidelines for Clock Traces
Maintain at least 5H spacing between two clock pairs, or between a clock pair
and any other memory interface trace. (Where H is the vertical distance to the
closest return path for that particular trace.)
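Because the 3H and 5H rules depend only on H, the height to the closest return plane, the minimum air gaps can be tabulated directly. The short sketch below uses a hypothetical H of 4 mils purely as an illustration.

    # Minimum edge-to-edge air gaps per the 3H / 5H guidelines above.
    H_MILS = 4.0   # hypothetical height to the closest return plane, in mils

    rules = {
        "DQ/DQS/DM traces":               3 * H_MILS,
        "Address/Command/Control traces": 3 * H_MILS,
        "Clock pairs":                    5 * H_MILS,
    }
    for group, gap in rules.items():
        print(f"{group:32s} minimum air gap = {gap:.1f} mils")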
2.7.5 Layout Guidelines for DDR3 and DDR4 SDRAM Wide Interface
(>72 bits)
The following topics discuss different ways to lay out a wider DDR3 or DDR4 SDRAM
interface to the FPGA. Choose the topology based on board trace simulation and the
timing budget of your system.
The UniPHY IP supports up to a 144-bit wide DDR3 interface. You can either use
discrete components or DIMMs to implement a wide interface (any interface wider
than 72 bits). Intel recommends using leveling when you implement a wide interface
with DDR3 components.
When you lay out for a wider interface, all rules and constraints discussed in the
previous sections still apply. The DQS, DQ, and DM signals are point-to-point, and all
the same rules discussed in Design Layout Guidelines apply.
The main challenge for the design of the fly-by network topology for the clock,
command, and address signals is to avoid signal integrity issues, and to make sure
you route the DQS, DQ, and DM signals with the chosen topology.
Related Links
Design Layout Guidelines on page 113
The general layout guidelines in the following topic apply to DDR3 and DDR4
SDRAM interfaces.
2.7.5.1 Fly-By Network Design for Clock, Command, and Address Signals
The UniPHY IP requires the flight-time skew between the first DDR3 SDRAM
component and the last DDR3 SDRAM component to be less than 0.69 tCK for
memory clocks. This constraint limits the number of components you can have for
each fly-by network.
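To get a feel for how this limit constrains component count, you can budget the per-segment trace delay and the loading delay of each component and accumulate them along the chain. The sketch below uses hypothetical per-hop numbers; real values must come from board simulation.

    # Hypothetical single fly-by chain budget for a DDR3 interface with tCK = 1250 ps.
    tck_ps = 1250.0
    limit_ps = 0.69 * tck_ps          # flight-time skew budget, first to last component

    segment_delay_ps = 70.0           # hypothetical trace delay between adjacent components
    loading_delay_ps = 20.0           # hypothetical extra delay per component load

    for n in range(2, 13):            # n = number of components on one fly-by network
        skew = (n - 1) * (segment_delay_ps + loading_delay_ps)
        if skew > limit_ps:
            print(f"{n} components: skew {skew:.0f} ps exceeds the {limit_ps:.0f} ps budget")
            break
        print(f"{n} components: skew {skew:.0f} ps is within the {limit_ps:.0f} ps budget")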
If you design with discrete components, you can choose to use one or more fly-by
networks for the clock, command, and address signals.
The following figure shows an example of a single fly-by network topology.
Figure 32. Single Fly-By Network Topology
The figure shows the FPGA driving the clock, command, and address net through
six DDR3 SDRAM components in a single fly-by chain terminated at VTT, with a
total flight time of less than 0.69 tCK.
Every DDR3 SDRAM component connected to the signal is a small load that causes
discontinuity and degrades the signal. When using a single fly-by network topology, to
minimize signal distortion, follow these guidelines:
• Use ×16 devices instead of ×4 or ×8 devices, to minimize the number of
  devices connected to the trace.
• Keep the stubs as short as possible.
• Even with added loads from additional components, keep the total trace
  length short; keep the distance between the FPGA and the first DDR3 SDRAM
  component less than 5 inches.
• Simulate clock signals to ensure a decent waveform.
The following figure shows an example of a double fly-by network topology. This
topology is not rigid but you can use it as an alternative option. The advantage of
using this topology is that you can have more DDR3 SDRAM components in a system
without violating the 0.69 tCK rule. However, as the signals branch out, the
components still create discontinuity.
Figure 33. Double Fly-By Network Topology
The figure shows the FPGA driving two fly-by branches, each passing through
six DDR3 SDRAM components and ending in its own VTT termination, with a flight
time of less than 0.69 tCK on each branch.
You must perform simulations to find the location of the split, and the best impedance
for the traces before and after the split.
The following figure shows a way to minimize the discontinuity effect. In this
example, keep TL2 and TL3 matched in length. Keep TL1 longer than TL2 and TL3,
so that it is easier to route all the signals during layout.
Figure 34. Minimizing Discontinuity Effect
The figure shows trace TL1 (ZQ = 25 Ω) running from the driver to the
splitting point, where it branches into traces TL2 and TL3 (each ZQ = 50 Ω).
You can also consider using a DIMM on each branch to replace the components.
Because the trace impedance on the DIMM card is 40-ohm to 60-ohm, perform a
board trace simulation to control the reflection to within the level your
system can tolerate.
By using the new features of the DDR3 SDRAM controller with UniPHY and the
Stratix III, Stratix IV, or Stratix V devices, you simplify your design process. Using the
fly-by daisy chain topology increases the complexity of the datapath and controller
design to achieve leveling, but also greatly improves performance and eases board
layout for DDR3 SDRAM.
You can also use DDR3 SDRAM components without leveling if doing so results in
a more optimal solution, or with devices that support the required electrical
interface standard but do not support the required read and write leveling
functionality.
2.8 Package Deskew
Trace lengths inside the device package are not uniform for all package pins. The
nonuniformity of package traces can affect system timing for high frequencies. In the
Quartus II software version 12.0 and later, and the Quartus Prime software, a package
deskew option is available.
If you do not enable the package deskew option, the Quartus Prime software uses the
package delay numbers to adjust skews on the appropriate signals; you do not need
to adjust for package delays on the board traces. If you do enable the package
deskew option, the Quartus Prime software does not use the package delay numbers
for timing analysis, and you must deskew the package delays with the board traces for
the appropriate signals for your design.
2.8.1 Package Deskew Recommendation for Stratix V Devices
Package deskew is not required for any memory protocol operating at 800 MHz or
below.
For DDR3 and RLDRAM3 designs operating above 800 MHz, you should run timing
analysis with accurately entered board skew parameters in the parameter editor. If
Report DDR reports non-core timing violations, you should then perform the steps in
the following topics, and modify your board layout. Package deskew is not required for
any protocols other than DDR3 and RLDRAM 3.
2.8.2 DQ/DQS/DM Deskew
To get the package delay information, follow these steps:
1. Select the FPGA DQ/DQS Package Skews Deskewed on Board checkbox on the
   Board Settings tab of the parameter editor.
2. Generate your IP.
3. Instantiate your IP in the project.
4. Run Analysis and Synthesis in the Quartus Prime software. (Skip this step
   if you are using an Arria 10 device.)
5. Run the <core_name>_p0_pin_assignment.tcl script. (Skip this step if you
   are using an Arria 10 device.)
6. Compile your design.
7. Refer to the All Package Pins compilation report, or find the pin delays
   displayed in the <core_name>.pin file.
2.8.3 Address and Command Deskew
Deskew address and command delays as follows:
1. Select the FPGA Address/Command Package Skews Deskewed on Board
checkbox on the Board Settings tab of the parameter editor.
2. Generate your IP.
3. Instantiate your IP in the project.
4. Run Analysis and Synthesis in the Quartus Prime software. (Skip this step if you
are using an Arria 10 device.)
5. Run the <core_name>_p0_pin_assignment.tcl script. (Skip this step if you
are using an Arria 10 device.)
6. Compile your design.
7. Refer to the All Package Pins compilation report, or find the pin delays displayed
in the <core_name>.pin file.
2.8.4 Package Deskew Recommendations for Arria 10 and Stratix 10
Devices
The following table shows package deskew recommendations for all protocols
supported on Arria 10 devices.
As operating frequencies increase, it becomes increasingly critical to perform package
deskew. The frequencies listed in the table are the minimum frequencies for which you
must perform package deskew.
If you plan to use a listed protocol at the specified frequency or higher, you must
perform package deskew. For example, you must perform package deskew if you plan
to use dual-rank DDR4 at 800 MHz or above.
Protocol                 | Minimum Frequency (MHz) for Which to Perform Package Deskew
                         | Single Rank   | Dual Rank      | Quad Rank
DDR4                     | 933           | 800            | 667
DDR3                     | 933           | 800            | 667
LPDDR3                   | 667           | 533            | Not required
QDR IV                   | 933           | Not applicable | Not applicable
RLDRAM 3                 | 933           | 667            | Not applicable
RLDRAM II                | Not required  | Not applicable | Not applicable
QDR II, II+, II+ Xtreme  | Not required  | Not applicable | Not applicable
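For planning purposes, the table can be captured as a small lookup that answers whether a given configuration needs package deskew. The thresholds below are transcribed from the table; None marks the "Not required" and "Not applicable" cells.

    # Minimum frequency (MHz) at or above which package deskew is required, per the table.
    DESKEW_MIN_MHZ = {
        "DDR4":                  {"single": 933, "dual": 800,  "quad": 667},
        "DDR3":                  {"single": 933, "dual": 800,  "quad": 667},
        "LPDDR3":                {"single": 667, "dual": 533,  "quad": None},
        "QDR IV":                {"single": 933, "dual": None, "quad": None},
        "RLDRAM 3":              {"single": 933, "dual": 667,  "quad": None},
        "RLDRAM II":             {"single": None, "dual": None, "quad": None},
        "QDR II/II+/II+ Xtreme": {"single": None, "dual": None, "quad": None},
    }

    def package_deskew_required(protocol, rank, freq_mhz):
        threshold = DESKEW_MIN_MHZ[protocol][rank]
        return threshold is not None and freq_mhz >= threshold

    print(package_deskew_required("DDR4", "dual", 800))    # True, per the example above
    print(package_deskew_required("DDR3", "single", 800))  # False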
2.8.5 Deskew Example
Consider an example where you want to deskew an interface with 4 DQ pins, 1 DQS
pin, and 1 DQSn pin.
Let’s assume an operating frequency of 667 MHz, and the package lengths for the pins
reported in the .pin file as follows:
dq[0]  = 120 ps
dq[1]  = 120 ps
dq[2]  = 100 ps
dq[3]  = 100 ps
dqs    = 80 ps
dqs_n  = 80 ps
The following figure illustrates this example.
Figure 35. Deskew Example
The figure shows the Stratix V device connected to the memory: board traces A,
B, C, and D carry mem_dq[0] through mem_dq[3], with package delays of 120 ps,
120 ps, 100 ps, and 100 ps respectively; traces E and F carry mem_dqs and
mem_dqs_n, each with a package delay of 80 ps.
When you perform length matching for all the traces in the DQS group, you must take
package delays into consideration. Because the package delays of traces A and B are
40 ps longer than the package delays of traces E and F, you would need to make the
board traces for E and F 40 ps longer than the board traces for A and B.
A similar methodology would apply to traces C and D, which should be 20 ps longer
than the lengths of traces A and B.
The following figure shows this scenario with the length of trace A at 450 ps.
Figure 36. Deskew Example with Trace Delay Calculations
The figure shows the same interface with board trace delays chosen so that the
total of package delay plus board delay is equal for every net in the group:
A = 450 ps
B = A = 450 ps
C = A + 20 ps = 470 ps
D = A + 20 ps = 470 ps
E = A + 40 ps = 490 ps
F = A + 40 ps = 490 ps
Package delays are as in the previous figure: 120 ps for mem_dq[0] and
mem_dq[1], 100 ps for mem_dq[2] and mem_dq[3], and 80 ps for mem_dqs and
mem_dqs_n.
When you enter the board skews into the Board Settings tab of the DDR3 parameter
editor, you should calculate the board skew parameters as the sums of board delay
and corresponding package delay. If a pin does not have a package delay (such as
address and command pins), you should use the board delay only.
The example of the preceding figure shows an ideal case where board skews are
perfectly matched. In reality, you should allow plus or minus 10 ps of skew mismatch
within a DQS group (DQ/DQS/DM).
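The arithmetic in this example reduces to a short calculation: choose a baseline board delay for the longest-package-delay net, then lengthen every other board trace by the difference in package delays so that package delay plus board delay is equal for all nets. The sketch below reproduces the example values; the ±10 ps assertion mirrors the tolerance noted above.

    # Package delays (ps) from the example .pin data above.
    package_ps = {
        "mem_dq[0]": 120.0, "mem_dq[1]": 120.0,
        "mem_dq[2]": 100.0, "mem_dq[3]": 100.0,
        "mem_dqs":    80.0, "mem_dqs_n":  80.0,
    }
    baseline_board_ps = 450.0               # board delay chosen for trace A (mem_dq[0])
    longest_pkg = max(package_ps.values())  # 120 ps

    # Target board delay so that package + board delay is equal for every net.
    targets = {net: baseline_board_ps + (longest_pkg - pkg)
               for net, pkg in package_ps.items()}

    for net, board in targets.items():
        total = board + package_ps[net]
        print(f"{net:11s} board {board:5.1f} ps + package {package_ps[net]:5.1f} ps = {total:5.1f} ps")

    # In practice, allow +/-10 ps of mismatch on the totals within the DQS group.
    totals = [targets[n] + package_ps[n] for n in targets]
    assert max(totals) - min(totals) <= 20.0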
2.8.6 Package Migration
Package delays can be different for the same pin in different packages. If you want to
use multiple migratable packages in your system, you should compensate for package
skew as described in this topic. The information in this topic applies to Arria 10,
Stratix V, and Stratix 10 devices.
Scenario 1
Your PCB is designed for multiple migratable devices, but you have only one device
with which to go to production.
Assume two migratable packages, device A and device B, and that you want to go to
production with device A. Follow these steps:
1. Perform package deskew for device A.
2. Compile your design for device A, with the Package Skew option enabled.
3. Note the skews in the <core_name>.pin file for device A. Deskew these
   package skews with board trace lengths as described in the preceding
   examples.
4. Recompile your design for device A.
5. For device B, open the parameter editor and deselect the Package Deskew
   option.
6. Calculate board skew parameters, only taking into account the board traces
   for device B, and enter that value into the parameter editor for device B.
7. Regenerate the IP and recompile the design for device B.
8. Verify that timing requirements are met for both device A and device B.
Scenario 2
Your PCB is designed for multiple migratable devices, and you want to go to
production with all of them.
Assume you have device A and device B, and plan to use both devices in production.
Follow these steps:
1. Do not perform any package deskew compensation for either device.
2. Compile a Quartus Prime design for device A with the Package Deskew option
   disabled, and ensure that all board skews are entered accurately.
3. Verify that the Report DDR timing report meets your timing requirements.
4. Compile a Quartus Prime design for device B with the Package Deskew option
   disabled, and ensure that all board skews are entered accurately.
5. Verify that the Report DDR timing report meets your timing requirements.
2.8.7 Package Deskew for RLDRAM II and RLDRAM 3
You should follow Intel's package deskew guidance if you are using Arria 10, Stratix
10, or Stratix V devices.
For more information on package deskew, refer to Package Deskew.
Related Links
Package Deskew
2.9 Document Revision History
Date           | Version    | Changes

May 2017       | 2017.05.08 |
• Added Channel Signal Integrity Measurement section.
• Added Stratix 10 to several sections.
• Removed QDR-IV future support note from Package Deskew Recommendations for
  Arria 10 and Stratix 10 Devices section.
• Rebranded as Intel.

October 2016   | 2016.10.31 | Maintenance release.

May 2016       | 2016.05.02 |
• Minor change to Clock Routing description in the DDR2 SDRAM Layout
  Guidelines table in Layout Guidelines for DDR2 SDRAM Interface.
• Added maximum length of the first SDRAM to the last SDRAM for clock routing
  and address and command routing for DDR4, in Layout Guidelines for DDR3 and
  DDR4 SDRAM Interfaces.
• Removed DRAM Termination Guidance from Layout Guidelines for DDR3 and DDR4
  SDRAM Interfaces.
• Added DDR4 support to Length Matching Rules.

November 2015  | 2015.11.02 |
• Minor additions to procedure steps in DQ/DQS/DM Deskew and Address and
  Command Deskew.
• Added reference to Micron Technical Note in Layout Guidelines for DDR3 and
  DDR4 SDRAM Interfaces.
• Changed title of Board Termination for DDR2 SDRAM to Termination for DDR2
  SDRAM and Board Termination for DDR3 SDRAM to Termination for DDR3 SDRAM.
• Changed title of Leveling and Dynamic ODT to Leveling and Dynamic
  Termination.
• Added DDR4 support in Dynamic ODT.
• Removed topics pertaining to older device families.
• Changed instances of Quartus II to Quartus Prime.

May 2015       | 2015.05.04 | Maintenance release.

December 2014  | 2014.12.15 |
• Added MAX 10 to On-Chip Termination topic.
• Added MAX 10 to Termination Recommendations table in Recommended Termination
  Schemes topic.

August 2014    | 2014.08.15 |
• Added Arria V SoC and Cyclone V SoC devices to note in Leveling and Dynamic
  ODT section.
• Added DDR4 to Read and Write Leveling section.
• Revised text in On-Chip Termination section.
• Added text to note in Board Termination for DDR3 SDRAM section.
• Added Layout Approach information in the DDR3 and DDR4 on Arria 10 Devices
  section.
• Recast expressions of length-matching measurements throughout DDR2 SDRAM
  Layout Guidelines table.
• Made several changes to DDR3 and DDR4 SDRAM Layout Guidelines table:
  — Added Spacing Guidelines section.
  — Removed millimeter approximations from lengths expressed in picoseconds.
  — Revised Guidelines for Clock Routing, Address and Command Routing, and DQ,
    DM, and DQS Routing Rules sections.
• Added Spacing Guidelines information to Design Layout Guidelines section.

December 2013  | 2013.12.16 |
• Review and minor updates of content.
• Consolidated General Layout Guidelines.
• Added DDR3 and DDR4 information for Arria 10 devices.
• Updated chapter title to include DDR4 support.
• Removed references to ALTMEMPHY.

November 2012  | 5.0        |
• Removed references to Cyclone III and Cyclone IV devices.
• Removed references to Stratix II devices.
• Corrected Vtt to Vdd in Memory Clocks for DDR3 SDRAM UDIMM section.
• Updated Layout Guidelines for DDR2 SDRAM Interface and Layout Guidelines for
  DDR3 SDRAM Interface.
• Added LRDIMM support.
• Added Package Deskew section.

June 2012      | 4.1        | Added Feedback icon.

November 2011  | 4.0        | Added Arria V and Cyclone V information.

June 2011      | 3.0        |
• Merged DDR2 and DDR3 chapters to DDR2 and DDR3 SDRAM Interface Termination
  and Layout Guidelines and updated with leveling information.
• Added Stratix V information.

December 2010  | 2.1        | Added DDR3 SDRAM Interface Termination, Drive
Strength, Loading, and Board Layout Guidelines chapter with Stratix V
information.

July 2010      | 2.0        | Updated Arria II GX information.

April 2010     | 1.0        | Initial release.
3 Dual-DIMM DDR2 and DDR3 SDRAM Board Design
Guidelines
The following topics describe guidelines for implementing dual unbuffered DIMM
(UDIMM) DDR2 and DDR3 SDRAM interfaces.
The following topics discuss the impact on signal integrity of the data signal with the
following conditions in a dual-DIMM configuration:
• Populating just one slot versus populating both slots
• Populating slot 1 versus slot 2 when only one DIMM is used
• On-die termination (ODT) setting of 75-ohm versus an ODT setting of 150-ohm
For detailed information about a single-DIMM DDR2 SDRAM interface, refer to the
DDR2 and DDR3 SDRAM Board Design Guidelines chapter.
Related Links
DDR2, DDR3, and DDR4 SDRAM Board Design Guidelines on page 83
The following topics provide guidelines for improving the signal integrity of your
system and for successfully implementing a DDR2, DDR3, or DDR4 SDRAM
interface on your system.
3.1 General Layout Guidelines
The following table lists general board design layout guidelines. These guidelines are
Intel recommendations, and should not be considered as hard requirements. You
should perform signal integrity simulation on all the traces to verify the signal integrity
of the interface. You should extract the slew rate and propagation delay information,
enter it into the IP and compile the design to ensure that timing requirements are
met.
Table 35. General Layout Guidelines

Parameter: Impedance
Guidelines:
• All unused via pads must be removed, because they cause unwanted
  capacitance.
• Trace impedance plays an important role in the signal integrity. You must
  perform board level simulation to determine the best characteristic
  impedance for your PCB. For example, it is possible that for multi rank
  systems 40 ohms could yield better results than a traditional 50 ohm
  characteristic impedance.

Parameter: Decoupling Parameter
Guidelines:
• Use 0.1 uF in 0402 size to minimize inductance.
• Make VTT voltage decoupling close to the termination resistors.
• Connect decoupling caps between VTT and ground.
• Use a 0.1 uF cap for every other VTT pin and a 0.01 uF cap for every VDD and
  VDDQ pin.
• Verify the capacitive decoupling using the Intel Power Distribution Network
  Design Tool.

Parameter: Power
Guidelines:
• Route GND and VCC as planes.
• Route VCCIO for memories in a single split plane with at least a 20-mil
  (0.020 inches, or 0.508 mm) gap of separation.
• Route VTT as islands or 250-mil (6.35-mm) power traces.
• Route oscillators and PLL power as islands or 100-mil (2.54-mm) power
  traces.

Parameter: General Routing
Guidelines:
All specified delay matching requirements include PCB trace delays, different
layer propagation velocity variance, and crosstalk. To minimize PCB layer
propagation variance, Intel recommends that signals from the same net group
always be routed on the same layer.
• Use 45° angles (not 90° corners).
• Avoid T-junctions for critical nets or clocks.
• Avoid T-junctions greater than 250 mils (6.35 mm).
• Disallow signals across split planes.
• Restrict routing other signals close to system reset signals.
• Avoid routing memory signals closer than 0.025 inch (0.635 mm) to PCI or
  system clocks.
Related Links
Power Distribution Network Design Tool
3.2 Dual-Slot Unbuffered DDR2 SDRAM
This topic describes guidelines for implementing a dual slot unbuffered DDR2 SDRAM
interface, operating at up to 400-MHz and 800-Mbps data rates.
The following figure shows a typical DQS, DQ, and DM signal topology for a dual-DIMM
interface configuration using the ODT feature of the DDR2 SDRAM components.
Figure 37. Dual-DIMM DDR2 SDRAM Interface Configuration
The figure shows the FPGA (driver) connected by a board trace to DDR2 SDRAM
DIMM slot 1 and slot 2 (receivers), with the net terminated by RT = 54 Ω to
VTT.
The simulations in this section use a Stratix II device-based board. Because of
limitations of this FPGA device family, simulations are limited to 266 MHz and
533 Mbps so that comparison to actual hardware results can be directly made.
3.2.1 Overview of ODT Control
When there is only a single-DIMM on the board, the ODT control is relatively
straightforward. During write to the memory, the ODT feature of the memory is turned
on; during read from the memory, the ODT feature of the memory is turned off.
However, when there are multiple DIMMs on the board, the ODT control becomes
more complicated.
With a dual-DIMM interface on the system, the controller has different options for
turning the memory ODT on or off during read or write. The following table lists the
DDR2 SDRAM ODT control during write to the memory. These DDR2 SDRAM ODT
controls are recommended by Samsung Electronics. The JEDEC DDR2 specification
was updated to include optional support for RTT(nominal) = 50-ohm.
For more information about the DDR2 SDRAM ODT controls recommended by
Samsung, refer to the Samsung DDR2 Application Note: ODT (On Die Termination)
Control.
Table 36. DDR2 SDRAM ODT Control—Writes

Slot 1 (2) | Slot 2 (2) | Write To | FPGA (1)      | Rank 1 (Slot 1) | Rank 2 (Slot 1) | Rank 3 (Slot 2) | Rank 4 (Slot 2)
DR         | DR         | Slot 1   | Series 50 ohm | Infinite        | Infinite        | 75 or 50 ohm    | Infinite
DR         | DR         | Slot 2   | Series 50 ohm | 75 or 50 ohm    | Infinite        | Infinite        | Infinite
SR         | SR         | Slot 1   | Series 50 ohm | Infinite        | Unpopulated     | 75 or 50 ohm    | Unpopulated
SR         | SR         | Slot 2   | Series 50 ohm | 75 or 50 ohm    | Unpopulated     | Infinite        | Unpopulated
DR         | Empty      | Slot 1   | Series 50 ohm | 150 ohm         | Infinite        | Unpopulated     | Unpopulated
Empty      | DR         | Slot 2   | Series 50 ohm | Unpopulated     | Unpopulated     | 150 ohm         | Infinite
SR         | Empty      | Slot 1   | Series 50 ohm | 150 ohm         | Unpopulated     | Unpopulated     | Unpopulated
Empty      | SR         | Slot 2   | Series 50 ohm | Unpopulated     | Unpopulated     | 150 ohm         | Unpopulated

Notes to Table:
1. For DDR2 at 400 MHz and 533 Mbps = 75-ohm; for DDR2 at 667 MHz and 800 Mbps = 50-ohm.
2. SR = single ranked; DR = dual ranked.
Table 37. DDR2 SDRAM ODT Control—Reads

Slot 1 (2) | Slot 2 (2) | Read From | FPGA (1)        | Rank 1 (Slot 1) | Rank 2 (Slot 1) | Rank 3 (Slot 2) | Rank 4 (Slot 2)
DR         | DR         | Slot 1    | Parallel 50 ohm | Infinite        | Infinite        | 75 or 50 ohm    | Infinite
DR         | DR         | Slot 2    | Parallel 50 ohm | 75 or 50 ohm    | Infinite        | Infinite        | Infinite
SR         | SR         | Slot 1    | Parallel 50 ohm | Infinite        | Unpopulated     | 75 or 50 ohm    | Unpopulated
SR         | SR         | Slot 2    | Parallel 50 ohm | 75 or 50 ohm    | Unpopulated     | Infinite        | Unpopulated
DR         | Empty      | Slot 1    | Parallel 50 ohm | Infinite        | Infinite        | Unpopulated     | Unpopulated
Empty      | DR         | Slot 2    | Parallel 50 ohm | Unpopulated     | Unpopulated     | Infinite        | Infinite
SR         | Empty      | Slot 1    | Parallel 50 ohm | Infinite        | Unpopulated     | Unpopulated     | Unpopulated
Empty      | SR         | Slot 2    | Parallel 50 ohm | Unpopulated     | Unpopulated     | Infinite        | Unpopulated

Notes to Table:
1. For DDR2 at 400 MHz and 533 Mbps = 75-ohm; for DDR2 at 667 MHz and 800 Mbps = 50-ohm.
2. SR = single ranked; DR = dual ranked.
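During controller bring-up it can help to encode these recommendations as a lookup. The sketch below covers only the rows for both slots populated with dual-rank DIMMs, as reconstructed in the tables above, and applies note 1 to choose between 75-ohm and 50-ohm; extend it with the remaining rows if you need them.

    # Illustrative subset of the ODT tables above: both slots populated, dual-rank DIMMs.
    def rtt_nominal(data_rate_mbps):
        # Note 1 to the tables: 75 ohm up to 533 Mbps, 50 ohm at 667/800 Mbps.
        return 75 if data_rate_mbps <= 533 else 50

    # Values: "rtt" means ODT on (75 or 50 ohm per note 1); None means ODT off (infinite).
    ODT_DR_DR = {
        ("write", 1): {"rank1": None,  "rank2": None, "rank3": "rtt", "rank4": None},
        ("write", 2): {"rank1": "rtt", "rank2": None, "rank3": None,  "rank4": None},
        ("read",  1): {"rank1": None,  "rank2": None, "rank3": "rtt", "rank4": None},
        ("read",  2): {"rank1": "rtt", "rank2": None, "rank3": None,  "rank4": None},
    }

    rtt = rtt_nominal(800)
    for (op, slot), ranks in ODT_DR_DR.items():
        settings = {r: (f"{rtt} ohm" if v == "rtt" else "off") for r, v in ranks.items()}
        print(f"{op:5s} to slot {slot}: {settings}")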
3.2.2 DIMM Configuration
Although populating both memory slots is common in a dual-DIMM memory system,
there are some instances when only one slot is populated.
For example, some systems are designed to have a certain amount of memory initially
and as applications get more complex, the system can be easily upgraded to
accommodate more memory by populating the second memory slot without redesigning the system. The following topics discuss a dual-DIMM system where the
dual-DIMM system only has one slot populated at one time and a dual-DIMM system
where both slots are populated. ODT controls recommended by memory vendors, as
well as other possible ODT settings are evaluated for usefulness in an FPGA system.
3.2.3 Dual-DIMM Memory Interface with Slot 1 Populated
The following topics focus on a dual-DIMM memory interface where slot 1 is populated
and slot 2 is unpopulated.
These topics examine the impact on the signal quality due to an unpopulated DIMM
slot and compares it to a single-DIMM memory interface.
3.2.3.1 FPGA Writing to Memory
In the DDR2 SDRAM, the ODT feature has two settings: 150-ohms and 75-ohms.
The recommended ODT setting for a dual DIMM configuration with one slot occupied is
150-ohm.
Note:
On DDR2 SDRAM devices running at 333 MHz/667 Mbps and above, the ODT feature
supports an additional setting of 50-ohm.
Refer to the respective memory data sheet for additional information about the
ODT settings in DDR2 SDRAM devices.
3.2.3.2 Write to Memory Using an ODT Setting of 150-ohm
The following figure shows a double parallel termination scheme (Class II) using ODT
on the memory with a memory-side series resistor when the FPGA is writing to the
memory using a 25-ohm OCT drive strength setting on the FPGA.
Figure 38. Double Parallel Termination Scheme (Class II) Using ODT on DDR2
SDRAM DIMM with Memory-Side Series Resistor
The figure shows the FPGA driver with 25 Ω OCT driving a 50 Ω, 3-inch board
trace terminated with RT = 54 Ω to VTT = 0.9 V; on the DDR2 DIMM, a series
resistor RS = 22 Ω connects to the DDR2 component, whose ODT appears as a
300 Ω/150 Ω split termination at the receiver (VREF = 0.9 V).
Related Links
DDR2, DDR3, and DDR4 SDRAM Board Design Guidelines on page 83
The following topics provide guidelines for improving the signal integrity of your
system and for successfully implementing a DDR2, DDR3, or DDR4 SDRAM
interface on your system.
3.2.3.3 Reading from Memory
During read from the memory, the ODT feature is turned off. Thus, there is no
difference between using an ODT setting of 150-ohm and 75-ohm. As such, the
termination scheme becomes a single parallel termination scheme (Class I) where
there is an external resistor on the FPGA side and a series resistor on the memory
side as shown in the following figure.
Figure 39. Single Parallel Termination Scheme (Class I) Using External Resistor
and Memory-Side Series Resistor
The figure shows the DDR2 component on the DIMM driving back through the
series resistor RS = 22 Ω and the 50 Ω, 3-inch board trace to the FPGA
receiver, with an external parallel termination RT = 54 Ω to VTT = 0.9 V at
the FPGA end and the memory ODT (300 Ω/150 Ω) turned off (VREF = 0.9 V).
Related Links
DDR2, DDR3, and DDR4 SDRAM Board Design Guidelines on page 83
The following topics provide guidelines for improving the signal integrity of your
system and for successfully implementing a DDR2, DDR3, or DDR4 SDRAM
interface on your system.
3.2.4 Dual-DIMM with Slot 2 Populated
The following topics focus on a dual-DIMM memory interface where slot 2 is populated
and slot 1 is unpopulated. Specifically, these topics discuss the impact of location of
the DIMM on the signal quality.
3.2.4.1 FPGA Writing to Memory
Previous topics focused on the dual-DIMM memory interface where slot 1 is
populated, resulting in the memory being located closer to the FPGA. When slot
2 is populated, the memory is located further away from the FPGA, resulting in
additional trace length that potentially affects the signal quality seen by
the memory. The following topics explore the differences between populating
slot 1 and slot 2 of the dual-DIMM memory interface.
3.2.4.2 Write to Memory Using an ODT Setting of 150-ohm
The following figure shows the double parallel termination scheme (Class II) using
ODT on the memory with the memory-side series resistor when the FPGA is writing to
the memory using a 25-ohm OCT drive strength setting on the FPGA.
Figure 40. Double Parallel Termination Scheme (Class II) Using ODT on DDR2
SDRAM DIMM with Memory-Side Series Resistor
The figure shows the same topology as Figure 38: the FPGA driver (25 Ω OCT)
drives the 50 Ω, 3-inch board trace terminated with RT = 54 Ω to VTT = 0.9 V,
through the DIMM series resistor RS = 22 Ω to the DDR2 component, whose ODT is
shown as a 300 Ω/150 Ω split termination (VREF = 0.9 V).
3.2.4.3 Reading from Memory
During read from memory, the ODT feature is turned off, thus there is no difference
between using an ODT setting of 150-ohm and 75-ohm. As such, the termination
scheme becomes a single parallel termination scheme (Class I) where there is an
external resistor on the FPGA side and a series resistor on the memory side, as shown
in the following figure.
Figure 41. Single Parallel Termination Scheme (Class I) Using External Resistor
and Memory-Side Series Resistor
The figure shows the same topology as Figure 39: the DDR2 component drives
through the series resistor RS = 22 Ω and the 50 Ω, 3-inch board trace to the
FPGA receiver, with an external RT = 54 Ω termination to VTT = 0.9 V at the
FPGA end (VREF = 0.9 V).
3.2.5 Dual-DIMM Memory Interface with Both Slot 1 and Slot 2 Populated
The following topics focus on a dual-DIMM memory interface where both slot 1 and
slot 2 are populated. As such, you can write to either the memory in slot 1 or the
memory in slot 2.
3.2.5.1 FPGA Writing to Memory
The recommended ODT setting for a dual-DIMM configuration with both slots
occupied is 75-ohm (see the DDR2 SDRAM ODT Control—Writes table). Because
there is also an option for an ODT setting of 150-ohm, the following topics
explore the use of the 150-ohm setting and compare the results to those of the
recommended 75-ohm setting.
3.2.5.2 Write to Memory in Slot 1 Using an ODT Setting of 75-ohm
The following figure shows the double parallel termination scheme (Class II) using
ODT on the memory with the memory-side series resistor when the FPGA is writing to
the memory using a 25-ohm OCT drive strength setting on the FPGA. In this scenario,
the FPGA is writing to the memory in slot 1 and the ODT feature of the memory at slot
2 is turned on.
Figure 42. Double Parallel Termination Scheme (Class II) Using ODT on DDR2
SDRAM DIMM with a Memory-Side Series Resistor
The figure shows the FPGA driver (25 Ω OCT) driving a 50 Ω, 3-inch board trace
terminated with RT = 54 Ω to VTT = 0.9 V, which connects to both DIMM slots.
Each DDR2 DIMM has a series resistor RS = 22 Ω in front of its DDR2 component;
in this scenario the FPGA writes to the memory in slot 1, and the ODT
(300 Ω/150 Ω split termination) of the memory in slot 2 is turned on
(VREF = 0.9 V).
3.2.5.3 Reading From Memory
In the DDR2 SDRAM ODT Control—Reads table, the recommended ODT setting for a
dual-DIMM configuration with both slots occupied is to turn on the ODT feature
using a setting of 75-ohm on the slot that is not being read from. As there is
an option for an ODT setting of 150-ohm, this section explores the usage of
the 150-ohm setting and compares the results to those of the recommended
75-ohm.
Read From Memory in Slot 1 Using an ODT Setting of 75-ohms on Slot 2
The following figure shows the double parallel termination scheme (Class II) using
ODT on the memory with the memory-side series resistor when the FPGA is reading
from the memory using a full drive strength setting on the memory. In this scenario,
the FPGA is reading from the memory in slot 1 and the ODT feature of the memory at
slot 2 is turned on.
Figure 43. Double Parallel Termination Scheme (Class II) Using External
Resistor and Memory-Side Series Resistor and ODT Feature Turned On
The figure shows the memory in slot 1 driving back through its series resistor
RS = 22 Ω and the 50 Ω, 3-inch board trace to the FPGA receiver, with
RT = 54 Ω terminating the trace to VTT = 0.9 V; the ODT (300 Ω/150 Ω split
termination) of the memory in slot 2 is turned on (VREF = 0.9 V).
Read From Memory in Slot 2 Using an ODT Setting of 75-ohms on Slot 1
In this scenario, the FPGA is reading from the memory in slot 2 and the ODT feature of
the memory at slot 1 is turned on.
Figure 44. Double Parallel Termination Scheme (Class II) Using External
Resistor and a Memory-Side Series Resistor and ODT Feature Turned On
The figure shows the memory in slot 2 driving back through its series resistor
RS = 22 Ω and the 50 Ω, 3-inch board trace to the FPGA receiver, with
RT = 54 Ω terminating the trace to VTT = 0.9 V; the ODT (150 Ω/300 Ω split
termination) of the memory in slot 1 is turned on (VREF = 0.9 V).
3.2.6 Dual-DIMM DDR2 Clock, Address, and Command Termination and
Topology
The address and command signals on a DDR2 SDRAM interface are unidirectional
signals that the FPGA memory controller drives to the DIMM slots. These signals are
always Class-I terminated at the memory end of the line, as shown in the following
figure.
Always place DDR2 SDRAM address and command Class-I termination after the last
DIMM. The interface can have one or two DIMMs, but never more than two DIMMs
total.
Figure 45. Multi DIMM DDR2 Address and Command Termination Topology
The figure shows the FPGA (driver) connected by Board Trace A to DIMM slot 1,
by Board Trace B from slot 1 to slot 2, and by Board Trace C from slot 2 to
the parallel termination RP = 47 Ω to VTT after the last DIMM.
In the above figure, observe the following points:
• Board trace A = 1.9 to 4.5 inches (48 to 115 mm)
• Board trace B = 0.425 inches (10.795 mm)
• Board trace C = 0.2 to 0.55 inches (5 to 13 mm)
• Total of board trace A + B + C = 2.5 to 5 inches (63 to 127 mm)
• RP = 36 to 56-ohm
• Length match all address and command signals to +250 mils (+5 mm) or +/– 50
  ps of memory clock length at the DIMM.
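A small check like the one below keeps these budgets visible during layout reviews. The trace lengths are hypothetical, in inches, and the allowed windows are the ones listed above.

    # Hypothetical dual-DIMM DDR2 address/command segment lengths (inches).
    trace_a = 3.2    # FPGA to slot 1 (allowed 1.9 to 4.5)
    trace_b = 0.425  # slot 1 to slot 2 (allowed 0.425)
    trace_c = 0.35   # slot 2 to the RP termination (allowed 0.2 to 0.55)

    checks = [
        ("Board trace A",   1.9,   4.5,   trace_a),
        ("Board trace B",   0.425, 0.425, trace_b),
        ("Board trace C",   0.2,   0.55,  trace_c),
        ("Total A + B + C", 2.5,   5.0,   trace_a + trace_b + trace_c),
    ]
    for name, lo, hi, value in checks:
        status = "OK" if lo <= value <= hi else "OUT OF RANGE"
        print(f"{name:16s} {value:5.3f} in (allowed {lo} to {hi} in) {status}")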
You may place a compensation capacitor directly before the first DIMM slot 1 to
improve signal quality on the address and command signal group. If you fit a
capacitor, Intel recommends a value of 24 pF.
For more information, refer to Micron TN47-01.
3.2.7 Control Group Signals
The control group of signals (chip select CS#, clock enable CKE, and ODT) is
always 1T, regardless of whether you implement a full-rate or half-rate design.
As the signals are also SDR, the control group signals operate at a maximum
frequency of 0.5 × the data rate. For example, in a 400-MHz design, the maximum
control group frequency is 200 MHz.
3.2.8 Clock Group Signals
Depending on the specific form factor, DDR2 SDRAM DIMMs have two or three
differential clock pairs, to ensure that the loading on the clock signals is not excessive.
The clock signals are always terminated on the DIMMs and hence no termination is
required on your PCB.
Additionally, each DIMM slot is required to have its own dedicated set of clock signals.
Hence clock signals are always point-to-point from the FPGA PHY to each individual
DIMM slot. Individual memory clock signals should never be shared between two
DIMM slots.
A typical two slot DDR2 DIMM design therefore has six differential memory clock pairs
—three to the first DIMM and three to the second DIMM. All six memory clock pairs
must be delay matched to each other to ±25 mils (±0.635 mm) and ±10 mils
(±0.254 mm) for each CLK to CLK# signal.
You may place a compensation capacitor between each clock pair directly before the
DIMM connector, to improve the clock slew rates. As FPGA devices have fully
programmable drive strength and slew rate options, this capacitor is usually not
required for FPGA design. However, Intel advises that you simulate your specific
implementation to ascertain whether this capacitor is required. If fitted, the
best value is typically 5 pF.
3.3 Dual-Slot Unbuffered DDR3 SDRAM
The following topics detail the system implementation of a dual slot unbuffered DDR3
SDRAM interface, operating at up to 400 MHz and 800 Mbps data rates.
The following figure shows a typical DQS, DQ, and DM, and address and command
signal topology for a dual-DIMM interface configuration, using the ODT feature of the
DDR3 SDRAM components combined with the dynamic OCT features available in
Stratix III and Stratix IV devices.
Figure 46. Multi DIMM DDR3 DQS, DQ, and DM, and Address and Command
Termination Topology
The figure shows the FPGA (driver) connected by Board Trace A to DDR3 SDRAM
DIMM slot 1 and by Board Trace B from slot 1 to slot 2.
In the above figure, observe the following points:
• Board trace A = 1.9 to 4.5 inches (48 to 115 mm)
• Board trace B = 0.425 inches (10.795 mm)
• This topology to both DIMMs is accurate for DQS, DQ, and DM, and address and
  command signals.
• This topology is not correct for CLK and CLK# and control group signals
  (CS#, CKE, and ODT), which are always point-to-point single rank only.
3.3.1 Comparison of DDR3 and DDR2 DQ and DQS ODT Features and
Topology
DDR3 and DDR2 SDRAM systems are quite similar. The physical topology of the data
group of signals may be considered nearly identical.
The FPGA end (driver) I/O standard changes from SSTL18 for DDR2 to SSTL15 for
DDR3, but all other OCT settings are identical. DDR3 offers enhanced ODT options for
termination and drive-strength settings at the memory end of the line.
For more information, refer to the DDR3 SDRAM ODT matrix for writes and the DDR3
SDRAM ODT matrix for reads tables in the DDR2 and DDR3 SDRAM Board Design
Guidelines chapter.
Related Links
DDR2, DDR3, and DDR4 SDRAM Board Design Guidelines on page 83
The following topics provide guidelines for improving the signal integrity of your
system and for successfully implementing a DDR2, DDR3, or DDR4 SDRAM
interface on your system.
3.3.2 Dual-DIMM DDR3 Clock, Address, and Command Termination and
Topology
One significant difference between DDR3 and DDR2 DIMM based interfaces is the
address, command and clock signals. DDR3 uses a daisy chained based architecture
when using JEDEC standard modules.
The address, command, and clock signals are routed on each module in a daisy chain
and feature a fly-by termination on the module. Impedance matching is required to
make the dual-DIMM topology work effectively—40 to 50-ohm traces should be
targeted on the main board.
3.3.2.1 Address and Command Signals
Two UDIMMs result in twice the effective load on the address and command signals,
which reduces the slew rate and makes it more difficult to meet setup and hold timing
(tIS and tIH). However, address and command signals operate at half the interface rate
and are SDR. Hence a 400-Mbps data rate equates to an address and command
fundamental frequency of 100 MHz.
3.3.2.2 Control Group Signals
The control group signals (chip Select CS#, clock enable CKE, and ODT) are only ever
single rank. A dual-rank capable DDR3 DIMM slot has two copies of each signal, and a
dual-DIMM slot interface has four copies of each signal.
The signal quality of these signals is identical to a single rank case. The control group
of signals, are always 1T regardless of whether you implement a full-rate or half-rate
design. As the signals are also SDR, the control group signals operate at a maximum
frequency of 0.5 × the data rate. For example, in a 400 MHz design, the maximum
control group frequency is 200 MHz.
3.3.2.3 Clock Group Signals
Like the control group signals, the clock signals in DDR3 SDRAM are only ever single
rank loaded. A dual-rank capable DDR3 DIMM slot has two copies of the signal, and a
dual-slot interface has four copies of the mem_clk and mem_clk_n signals.
For more information about a DDR3 two-DIMM system design, refer to Micron
TN-41-08: DDR3 Design Guide for Two-DIMM Systems.
3.3.3 FPGA OCT Features
Many FPGA devices offer OCT. Depending on the chosen device family, series (output),
parallel (input) or dynamic (bidirectional) OCT may be supported.
For more information specific to your device family, refer to the respective I/O
features chapter in the relevant device handbook.
Use series OCT in place of the near-end series terminator typically used in both Class I
or Class II termination schemes that both DDR2 and DDR3 type interfaces use.
Use parallel OCT in place of the far-end parallel termination typically used
in Class I termination schemes on unidirectional, input-only interfaces; for
example, QDR II-type interfaces where the FPGA is at the far end of the line.
Use dynamic OCT in place of both the series and parallel termination at the FPGA end
of the line. Typically use dynamic OCT for DQ and DQS signals in both DDR2 and
DDR3 type interfaces. As the parallel termination is dynamically disabled during
writes, the FPGA driver only ever drives into a Class I transmission line. When
combined with dynamic ODT at the memory, a truly dynamic Class I termination
scheme exists where both reads and writes are always fully Class I terminated in each
direction. Hence, you can use a fully dynamic bidirectional Class I termination scheme
instead of a static discretely terminated Class II topology, which saves power, printed
circuit board (PCB) real estate, and component cost.
3.3.3.1 Arria V, Cyclone V, Stratix III, Stratix IV, and Stratix V Devices
Arria V, Cyclone V, Stratix III, Stratix IV, and Stratix V devices feature
full dynamic OCT termination capability. Intel advises that you use this
feature, combined with the SDRAM ODT, to simplify PCB layout and save power.
3.3.3.2 Arria II GX Devices
Arria II GX devices do not support dynamic OCT. Intel recommends that you use
series OCT with SDRAM ODT. Use parallel discrete termination at the FPGA end of the
line when necessary.
For more information, refer to the DDR2 and DDR3 SDRAM Board Design Guidelines
chapter.
Related Links
DDR2, DDR3, and DDR4 SDRAM Board Design Guidelines on page 83
The following topics provide guidelines for improving the signal integrity of your
system and for successfully implementing a DDR2, DDR3, or DDR4 SDRAM
interface on your system.
3.4 Document Revision History
Date           | Version    | Changes
May 2017       | 2017.5.08  | Rebranded as Intel.
October 2016   | 2016.10.31 | Maintenance release.
May 2016       | 2016.05.02 | Maintenance release.
November 2015  | 2015.11.02 | Maintenance release.
May 2015       | 2015.05.04 | Maintenance release.
December 2014  | 2014.12.15 | Maintenance release.
August 2014    | 2014.08.15 | Removed Address and Command Signals section from
Dual-DIMM DDR2 Clock, Address, and Command Termination and Topology.
December 2013  | 2013.12.16 |
• Reorganized content.
• Consolidated General Layout Guidelines.
• Removed references to ALTMEMPHY.
• Removed references to Stratix II devices.
June 2012      | 4.1        | Added Feedback icon.
November 2011  | 4.0        | Added Arria V and Cyclone V information.
June 2011      | 3.0        | Added Stratix V information.
December 2010  | 2.1        | Maintenance update.
July 2010      | 2.0        | Updated Arria II GX information.
April 2010     | 1.0        | Initial release.
4 LPDDR2 and LPDDR3 SDRAM Board Design Guidelines
The following topics provide guidelines to improve your system's signal integrity and
to successfully implement an LPDDR2 or LPDDR3 SDRAM interface in your system.
4.1 LPDDR2 Guidance
The LPDDR2 SDRAM Controller with UniPHY intellectual property (IP) enables you
to implement LPDDR2 SDRAM interfaces with Arria V and Cyclone V devices.
The following topics focus on key factors that affect signal integrity:
• I/O standards
• LPDDR2 configurations
• Signal terminations
• Printed circuit board (PCB) layout guidelines
I/O Standards
LPDDR2 SDRAM interface signals use HSUL-12 JEDEC I/O signaling standards, which
provide low power and low emissions. The HSUL-12 JEDEC I/O standard is mainly for
point-to-point unterminated bus topology. This standard eliminates the need for
external series or parallel termination resistors in LPDDR2 SDRAM implementation.
With this standard, termination power is greatly reduced and programmable drive
strength is used to match the impedance.
To select the most appropriate standard for your interface, refer to the
Device Datasheet for Arria V Devices chapter in the Arria V Device Handbook,
or the Device Datasheet for Cyclone V Devices chapter in the Cyclone V Device
Handbook.
Related Links
• Arria V Device Datasheet
• Cyclone V Device Datasheet
4.1.1 LPDDR2 SDRAM Configurations
The LPDDR2 SDRAM Controller with UniPHY IP supports interfaces for LPDDR2 SDRAM
with a single device, and multiple devices up to a maximum width of 32 bits.
When using multiple devices, a balanced-T topology is recommended for the signal
connected from single point to multiple point, to maintain equal flight time.
You should connect a 200 ohm differential termination resistor between CK/CK# in
multiple device designs as shown in the second figure below, to maintain an effective
resistance of 100 ohms.
You should also simulate your multiple device design to obtain the optimum drive
strength settings and ensure correct operation.
The following figure shows the main signal connections between the FPGA and a single
LPDDR2 SDRAM component.
Figure 47. Configuration with a Single LPDDR2 SDRAM Component
The figure shows the FPGA connected point-to-point to the LPDDR2 SDRAM device
for DQS/DQS#, DQ, DM, CK/CK#, command address (CA), CKE, and CS. The memory ZQ
pin connects to ground through RZQ, and the CKE signal has a 4.7 KΩ resistor,
as described in the note below (1).
Note to Figure:
1. Use external discrete termination, as shown for CKE, but you may require a pulldown resistor to GND. Refer to the LPDDR2 SDRAM device data sheet for more
information about LPDDR2 SDRAM power-up sequencing.
The following figure shows the differential resistor placement for CK/CK# for multipoint designs.
Figure 48. CK Differential Resistor Placement for Multi Point Design
The figure shows the FPGA CK/CK# pair routed over Trace Length 1 to a split
point, then over Trace Length 2 and Trace Length 3 to LPDDR2 Device 1 and
LPDDR2 Device 2, with a 200 Ω differential resistor across CK/CK# near each
device.
Note to Figure:
1.
Place 200-ohm differential resistors near the memory devices at the end of the
last board trace segments.
The following figure shows the detailed balanced topology recommended for the
address and command signals in the multi-point design.
Figure 49. Address Command Balanced-T Topology
The figure shows the FPGA driving trace TL1 to a split point (1), which
branches into two TL2 segments, one to each LPDDR2 memory device.
Notes to Figure:
1. Split the trace close to the memory devices to minimize signal reflections
   and impedance nonuniformity.
2. Keep the TL2 traces as short as possible, so that the memory devices appear
   as a single load.
4.1.2 OCT Signal Terminations for Arria V and Cyclone V Devices
Arria V and Cyclone V devices offer OCT technology. The following table lists the
extent of OCT support for each device.
Table 38. On-Chip Termination Schemes

Termination Scheme                              | I/O Standard | Arria V and Cyclone V
On-Chip Series Termination without Calibration  | HSUL-12      | 34/40/48/60/80
On-Chip Series Termination with Calibration     | HSUL-12      | 34/40/48/60/80
On-chip series (RS) termination supports output buffers, and bidirectional buffers only
when they are driving output signals. LPDDR2 SDRAM interfaces have bidirectional
data paths. The UniPHY IP uses series OCT for memory writes but no parallel OCT for
memory reads because Arria V and Cyclone V support only on-chip series termination
in the HSUL-12 I/O standard.
For Arria V and Cyclone V devices, the HSUL-12 I/O calibrated terminations are
calibrated against 240 ohm 1% resistors connected to the RZQ pins in an I/O bank
with the same VCCIO as the LPDDR2 interface.
Calibration occurs at the end of the device configuration.
LPDDR2 SDRAM memory components have a ZQ pin which connects through a resistor
RZQ (240 ohm) to ground. The output signal impedances for LPDDR2 SDRAM are
34.3 ohm, 40 ohm, 48 ohm, 60 ohm, 80 ohm, and 120 ohm. The output signal
impedance is set by mode register during initialization. Refer to the LPDDR2 SDRAM
device data sheet for more information.
For information about OCT, refer to the I/O Features in Arria V Devices chapter in the
Arria V Device Handbook, or the I/O Features in Cyclone V Devices chapter in the
Cyclone V Device Handbook.
The following section shows HyperLynx simulation eye diagrams to demonstrate signal
termination options. Intel strongly recommends signal terminations to optimize signal
integrity and timing margins, and to minimize unwanted emissions, reflections, and
crosstalk.
All of the eye diagrams shown in this section are for a 50 ohm trace with a
propagation delay of 509 ps which is approximately a 2.8-inch trace on a standard
FR4 PCB. The signal I/O standard is HSUL-12.
The eye diagrams in this section show the best case achievable and do not take into
account PCB vias, crosstalk, and other degrading effects such as variations in the PCB
structure due to manufacturing tolerances.
Note:
Simulate your design to ensure correct operation.
Related Links
• I/O Features in Arria V Devices
• I/O Features in Cyclone V Devices
4.1.2.1 Outputs from the FPGA to the LPDDR2 Component
The following output signals are from the FPGA to the LPDDR2 SDRAM component:
• write data (DQ)
• data mask (DM)
• data strobe (DQS/DQS#)
• command address
• command (CS, and CKE)
• clocks (CK/CK#)
No far-end memory termination is needed when driving output signals from FPGA to
LPDDR2 SDRAM. Cyclone V and Arria V devices offer the OCT series termination for
impedance matching.
4.1.2.2 Input to the FPGA from the LPDDR2 SDRAM Component
The LPDDR2 SDRAM component drives the following input signals into the FPGA:
• read data
• DQS
LPDDR2 SDRAM provides the flexibility to adjust drive strength to match the
impedance of the memory bus, eliminating the need for termination voltage (VTT) and
series termination resistors.
The programmable drive strength options are 34.3 ohms, 40 ohms (default), 48 ohms,
60 ohms, 80 ohms, and 120 ohms. You must perform board simulation to determine
the best option for your board layout.
Note: By default, the LPDDR2 SDRAM UniPHY IP uses a 40-ohm drive strength.
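As a starting point before board simulation, the programmable drive strengths listed above can be compared against the target line impedance. The helper below is hypothetical and not part of any Intel tool; the option list comes from the text above, and board simulation remains the deciding factor.

# Hypothetical helper: pick the LPDDR2 drive-strength setting closest to a
# target line impedance. The option list comes from the text above; this
# heuristic is only a starting point -- board simulation decides.
LPDDR2_DRIVE_OHMS = [34.3, 40, 48, 60, 80, 120]   # 40 ohm is the UniPHY default

def nearest_drive_strength(target_ohms: float) -> float:
    """Return the programmable drive strength nearest the target impedance."""
    return min(LPDDR2_DRIVE_OHMS, key=lambda r: abs(r - target_ohms))

if __name__ == "__main__":
    print(nearest_drive_strength(50))   # 48 ohm for a 50-ohm trace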
4.1.2.3 Termination Schemes
The following table lists the recommended termination schemes for major LPDDR2
SDRAM memory interface signals.
These signals include data (DQ), data strobe (DQS), data mask (DM), clocks (CK, and
CK#), command address (CA), and control (CS#, and CKE).
Table 39. Termination Recommendations for Arria V and Cyclone V Devices
Signal Type            | HSUL-12 Standard (1) (2) | Memory End Termination
DQS/DQS#               | R34 CAL                  | ZQ40
Data (Write)           | R34 CAL                  | –
Data (Read)            | –                        | ZQ40
Data Mask (DM)         | R34 CAL                  | –
CK/CK# Clocks          | R34 CAL                  | ×1 = – (4); ×2 = 200-ohm Differential (5)
Command Address (CA)   | R34 CAL                  | –
Chip Select (CS#)      | R34 CAL                  | –
Clock Enable (CKE) (3) | R34 CAL                  | 4.7 K-ohm parallel to GND
Notes to Table:
1. R is effective series output impedance.
2. CAL is OCT with calibration.
3. Intel recommends that you use a 4.7 K-ohm parallel to GND if your design meets the power sequencing requirements of the LPDDR2 SDRAM component. Refer to the LPDDR2 SDRAM data sheet for further information.
4. ×1 is a single-device load.
5. ×2 is a double-device load. An alternative option is to use a 100-ohm differential termination at the trace split.
Note:
The recommended termination schemes in the above table are based on 2.8 inch
maximum trace length analysis. You may add the external termination resistor or
adjust the drive strength to improve signal integrity for longer trace lengths.
Recommendations for external termination are as follows:
• Class I termination (50 ohms parallel to VTT at the memory end) — unidirectional signals (command address, control, and CK/CK# signals)
• Class II termination (50 ohms parallel to VTT at both ends) — bidirectional signals (DQ and DQS/DQS# signals)
Intel recommends that you simulate your design to ensure good signal integrity.
4.1.3 General Layout Guidelines
The following table lists general board design layout guidelines. These guidelines are
Intel recommendations, and should not be considered as hard requirements. You
should perform signal integrity simulation on all the traces to verify the signal integrity
of the interface. You should extract the slew rate and propagation delay information,
enter it into the IP and compile the design to ensure that timing requirements are
met.
Table 40. General Layout Guidelines

Impedance:
• All unused via pads must be removed, because they cause unwanted capacitance.
• Trace impedance plays an important role in the signal integrity. You must perform board-level simulation to determine the best characteristic impedance for your PCB. For example, it is possible that for multi-rank systems 40 ohms could yield better results than a traditional 50-ohm characteristic impedance.

Decoupling Parameter:
• Use 0.1 uF in 0402 size to minimize inductance.
• Place VTT voltage decoupling close to the termination resistors.
• Connect decoupling caps between VTT and ground.
• Use a 0.1 uF cap for every other VTT pin and a 0.01 uF cap for every VDD and VDDQ pin.
• Verify the capacitive decoupling using the Intel Power Distribution Network Design Tool.

Power:
• Route GND and VCC as planes.
• Route VCCIO for memories in a single split plane with at least a 20-mil (0.020 inches, or 0.508 mm) gap of separation.
• Route VTT as islands or 250-mil (6.35-mm) power traces.
• Route oscillators and PLL power as islands or 100-mil (2.54-mm) power traces.

General Routing:
All specified delay matching requirements include PCB trace delays, different layer propagation velocity variance, and crosstalk. To minimize PCB layer propagation variance, Intel recommends that signals from the same net group always be routed on the same layer.
• Use 45° angles (not 90° corners).
• Avoid T-junctions for critical nets or clocks.
• Avoid T-junctions greater than 250 mils (6.35 mm).
• Disallow signals across split planes.
• Restrict routing other signals close to system reset signals.
• Avoid routing memory signals closer than 0.025 inch (0.635 mm) to PCI or system clocks.
Related Links
Power Distribution Network Design Tool
4.1.4 LPDDR2 Layout Guidelines
The following table lists the LPDDR2 SDRAM general routing layout guidelines.
Note:
The following layout guidelines include several +/- length-based rules. These
length-based guidelines are for first order timing approximations if you cannot
simulate the actual delay characteristics of your PCB implementation. They do not
include any margin for crosstalk. Intel recommends that you get accurate time base
skew numbers when you simulate your specific implementation.
Table 41. LPDDR2 Layout Guidelines

General Routing:
• If you must route signals of the same net group on different layers with the same impedance characteristic, simulate your worst-case PCB trace tolerances to ascertain actual propagation delay differences. Typical layer-to-layer trace delay variations are on the order of 15 ps/inch.
• Avoid T-junctions greater than 75 ps.
• Match all signals within a given DQ group with a maximum skew of ±10 ps and route on the same layer.

Clock Routing:
• Route clocks on inner layers with outer-layer run lengths held to under 150 ps.
• These signals should maintain a 10-mil (0.254 mm) spacing from other nets.
• Clocks should maintain a length-matching between clock pairs of ±5 ps.
• Differential clocks should maintain a length-matching between P and N signals of ±2 ps.
• Space between different clock pairs should be at least three times the space between the traces of a differential pair.

Address and Command Routing:
• To minimize crosstalk, route address and command signals on a different layer than the data and data mask signals.
• Do not route the differential clock (CK/CK#) and clock enable (CKE) signals close to the address signals.

External Memory Routing Rules:
• Apply the following parallelism rules for the LPDDR2 SDRAM data groups:
  — 4 mils for parallel runs < 0.1 inch (approximately 1× spacing relative to plane distance).
  — 5 mils for parallel runs < 0.5 inch (approximately 1× spacing relative to plane distance).
  — 10 mils for parallel runs between 0.5 and 1.0 inches (approximately 2× spacing relative to plane distance).
  — 15 mils for parallel runs between 1.0 and 2.8 inches (approximately 3× spacing relative to plane distance).
• Apply the following parallelism rules for the address/command group and clocks group:
  — 4 mils for parallel runs < 0.1 inch (approximately 1× spacing relative to plane distance).
  — 10 mils for parallel runs < 0.5 inch (approximately 2× spacing relative to plane distance).
  — 15 mils for parallel runs between 0.5 and 1.0 inches (approximately 3× spacing relative to plane distance).
  — 20 mils for parallel runs between 1.0 and 2.8 inches (approximately 4× spacing relative to plane distance).

Maximum Trace Length:
• Keep traces as short as possible. The maximum trace length of all signals from the FPGA to the LPDDR2 SDRAM components should be less than 509 ps. Intel recommends that you simulate your design to ensure good signal integrity.

Trace Matching Guidance:
The following layout approach is recommended, based on the preceding guidelines (a simple skew-check sketch follows this table):
1. Route the differential clocks (CK/CK#) and data strobes (DQS/DQS#) with a length-matching between P and N signals of ±2 ps.
2. Route the DQS/DQS# associated with a DQ group on the same PCB layer. Match these DQS pairs to within ±5 ps.
3. Set the DQS/DQS# as the target trace propagation delay for the associated data and data mask signals.
4. Route the data and data mask signals for the DQ group ideally on the same layer as the associated DQS/DQS#, to within ±10 ps skew of the target DQS/DQS#.
5. Route the CK/CK# clocks and set them as the target trace propagation delays for the DQ group. Match the CK/CK# clock to within ±50 ps of all the DQS/DQS#.
6. Route the address/control signal group (address, CS, CKE) ideally on the same layer as the CK/CK# clocks, to within ±20 ps skew of the CK/CK# traces.
This layout approach provides a good starting point for a design requirement of the highest clock frequency supported for the LPDDR2 SDRAM interface.
Note: You should create your project in the Quartus® Prime software with a fully implemented LPDDR2 interface, and observe the interface timing margins to determine the actual margins for your design.
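The matching targets in the table above are expressed in picoseconds. A minimal sketch for checking a set of absolute trace delays against the ±10 ps DQ-group budget follows; the net names and delay numbers are placeholders, and real delays should come from your board database or simulation.

# Minimal skew check against the LPDDR2 matching targets listed above
# (for example, +/-10 ps within a DQ group relative to DQS).
# The trace-delay numbers below are placeholders only.
def group_skew_violations(target_ps, delays_ps, budget_ps):
    """Return the nets whose delay differs from the target by more than the budget."""
    return {net: d - target_ps for net, d in delays_ps.items()
            if abs(d - target_ps) > budget_ps}

dqs_delay = 452.0                     # target: DQS propagation delay (ps)
dq_delays = {"dq0": 455.0, "dq1": 449.0, "dq2": 466.0, "dm": 451.0}

print(group_skew_violations(dqs_delay, dq_delays, budget_ps=10.0))
# {'dq2': 14.0}  -> dq2 misses the +/-10 ps budget and needs rework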
Although the recommendations in this chapter are based on simulations, you can
apply the same general principles when determining the best termination scheme,
drive strength setting, and loading style to any board design. Even armed with this
knowledge, it is still critical that you simulate your design with IBIS or HSPICE models,
to determine the quality of signal integrity in your design.
Related Links
Intel Power Distribution Network (PDN) Design tool
4.2 LPDDR3 Guidance
The LPDDR3 SDRAM Controller intellectual property (IP) enables you to implement LPDDR3 SDRAM interfaces with Arria® 10 and Stratix® 10 devices.
For all practical purposes, you can regard the TimeQuest timing analyzer's report on
your memory interface as definitive for a given set of memory and board timing
parameters. You can find timing information under Report DDR in TimeQuest and on
the Timing Analysis tab in the parameter editor.
The following flowchart illustrates the recommended process to follow during the
design phase, to determine timing margin and make iterative improvements to your
design.
(Flowchart: Primary Layout → Calculate Setup and Hold Derating, Calculate Channel Signal Integrity, Calculate Board Skews, and Find Memory Timing Parameters → Generate an IP core that accurately represents your memory subsystem, including pin-out and accurate parameters in the parameter editor's Board Settings tab → Run Quartus Prime compilation with the generated IP core → If there are non-core timing violations in the Report DDR panel, adjust the layout to improve trace length mismatch, signal reflections (ISI), crosstalk, or memory speed grade, and iterate; otherwise, done.)
4.2.1 Signal Integrity, Board Skew, and Board Setting Parameters
Channel Signal Integrity
For information on determining channel signal integrity, refer to the wiki page: http://www.alterawiki.com/wiki/Arria_10_EMIF_Simulation_Guidance.
Board Skew
For information on calculating board skew parameters, refer to Implementing and
Parameterizing Memory IP. The Board Skew Parameter Tool is an interactive tool that
can help you calculate board skew parameters if you know the absolute delay values
for all the memory related traces.
Arria 10 Board Setting Parameters
For Board Setting and layout approach information for Arria 10 devices, refer to the wiki page: http://www.alterawiki.com/wiki/Arria_10_EMIF_Simulation_Guidance.
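The Board Skew Parameter Tool computes board-skew parameters from absolute trace delays. The sketch below illustrates, in simplified form, the kind of arithmetic involved; the parameter definitions here are illustrative only and are not the IP's exact formulas, which are given in Implementing and Parameterizing Memory IP.

# Simplified illustration of board-skew arithmetic: given absolute delays (ns)
# for each DQ/DQS trace, compute the worst-case skew within a DQS group and
# between groups. This is not a substitute for the Board Skew Parameter Tool;
# consult "Implementing and Parameterizing Memory IP" for the exact formulas.
groups = {
    "dqs0": {"dqs": 0.512, "dq": [0.505, 0.510, 0.515, 0.508]},
    "dqs1": {"dqs": 0.498, "dq": [0.495, 0.501, 0.499, 0.503]},
}

max_skew_within_group = max(
    max(abs(d - g["dqs"]) for d in g["dq"]) for g in groups.values()
)
group_averages = [sum(g["dq"]) / len(g["dq"]) for g in groups.values()]
max_skew_between_groups = max(group_averages) - min(group_averages)

print(f"max skew within a DQS group : {max_skew_within_group:.3f} ns")
print(f"max skew between DQS groups : {max_skew_between_groups:.3f} ns")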
4.2.2 LPDDR3 Layout Guidelines
The following table lists the LPDDR3 SDRAM general routing layout guidelines.
Table 42. LPDDR3 Layout Guidelines

Max Length Discrete: 500 ps.
Data Group Skew: Match DM and DQ within 5 ps of DQS.
Address/Command vs Clock Skew: Match Address/Command signals within 10 ps of mem CK.
Package Skew Matching: Yes.
Clock Matching:
• 2 ps within a clock pair
• 5 ps between clock pairs
Spacing Guideline Data/Data Strobe/Address/Command: 3H spacing between any Data and Address/Command traces, where H is the distance to the nearest return path.
Spacing Guideline Mem Clock: 5H spacing between mem clock and any other signal, where H is the distance to the nearest return path.
4.2.3 Package Deskew
Trace lengths inside the device package are not uniform for all package pins. The
nonuniformity of package traces can affect system timing for high frequencies. In the
Quartus II software version 12.0 and later, and the Quartus Prime software, a package
deskew option is available.
If you do not enable the package deskew option, the Quartus Prime software uses the
package delay numbers to adjust skews on the appropriate signals; you do not need
to adjust for package delays on the board traces. If you do enable the package
deskew option, the Quartus Prime software does not use the package delay numbers
for timing analysis, and you must deskew the package delays with the board traces for
the appropriate signals for your design.
4.2.3.1 DQ/DQS/DM Deskew
To get the package delay information, follow these steps:
1. Select the FPGA DQ/DQS Package Skews Deskewed on Board checkbox on the Board Settings tab of the parameter editor.
2. Generate your IP.
3. Instantiate your IP in the project.
4. Run Analysis and Synthesis in the Quartus Prime software. (Skip this step if you are using an Arria 10 device.)
5. Run the <core_name>_p0_pin_assignment.tcl script. (Skip this step if you are using an Arria 10 device.)
6. Compile your design.
7. Refer to the All Package Pins compilation report, or find the pin delays displayed in the <core_name>.pin file.
4.2.3.2 Address and Command Deskew
Deskew address and command delays as follows:
1. Select the FPGA Address/Command Package Skews Deskewed on Board checkbox on the Board Settings tab of the parameter editor.
2. Generate your IP.
3. Instantiate your IP in the project.
4. Run Analysis and Synthesis in the Quartus Prime software. (Skip this step if you are using an Arria 10 device.)
5. Run the <core_name>_p0_pin_assignment.tcl script. (Skip this step if you are using an Arria 10 device.)
6. Compile your design.
7. Refer to the All Package Pins compilation report, or find the pin delays displayed in the <core_name>.pin file.
4.2.3.3 Package Deskew Recommendations for Arria 10 and Stratix 10 Devices
The following table shows package deskew recommendations for all protocols
supported on Arria 10 devices.
As operating frequencies increase, it becomes increasingly critical to perform package
deskew. The frequencies listed in the table are the minimum frequencies for which you
must perform package deskew.
If you plan to use a listed protocol at the specified frequency or higher, you must
perform package deskew. For example, you must perform package deskew if you plan
to use dual-rank DDR4 at 800 MHz or above.
Minimum Frequency (MHz) for Which to Perform Package Deskew
Protocol                | Single Rank  | Dual Rank      | Quad Rank
DDR4                    | 933          | 800            | 667
DDR3                    | 933          | 800            | 667
LPDDR3                  | 667          | 533            | Not required
QDR IV                  | 933          | Not applicable | Not applicable
RLDRAM 3                | 933          | 667            | Not applicable
RLDRAM II               | Not required | Not applicable | Not applicable
QDR II, II+, II+ Xtreme | Not required | Not applicable | Not applicable
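The table above can be encoded as a small lookup when scripting design checks. The helper below is illustrative only; the function and dictionary names are not part of any Intel tool.

# The package-deskew table above, encoded as a lookup. Returns True when
# package deskew is required for the given protocol, rank count, and
# operating frequency. Helper name and structure are illustrative only.
DESKEW_MIN_MHZ = {            # None = deskew not required / not applicable
    "DDR4":      {1: 933, 2: 800, 4: 667},
    "DDR3":      {1: 933, 2: 800, 4: 667},
    "LPDDR3":    {1: 667, 2: 533, 4: None},
    "QDR IV":    {1: 933, 2: None, 4: None},
    "RLDRAM 3":  {1: 933, 2: 667, 4: None},
    "RLDRAM II": {1: None, 2: None, 4: None},
    "QDR II":    {1: None, 2: None, 4: None},
}

def deskew_required(protocol: str, ranks: int, freq_mhz: float) -> bool:
    threshold = DESKEW_MIN_MHZ[protocol][ranks]
    return threshold is not None and freq_mhz >= threshold

print(deskew_required("DDR4", 2, 800))    # True  (dual-rank DDR4 at 800 MHz)
print(deskew_required("LPDDR3", 1, 533))  # False (single rank below 667 MHz)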
4.2.3.4 Deskew Example
Consider an example where you want to deskew an interface with 4 DQ pins, 1 DQS
pin, and 1 DQSn pin.
Let’s assume an operating frequency of 667 MHz, and the package lengths for the pins
reported in the .pin file as follows:
dq[0]  = 120 ps
dq[1]  = 120 ps
dq[2]  = 100 ps
dq[3]  = 100 ps
dqs    = 80 ps
dqs_n  = 80 ps
The following figure illustrates this example.
Figure 50. Deskew Example
(Figure: the Stratix V FPGA pins mem_dq[0..3], mem_dqs, and mem_dqs_n connect to the memory through board traces A through F; the package delays are 120 ps for dq[0] and dq[1], 100 ps for dq[2] and dq[3], and 80 ps for dqs and dqs_n.)
When you perform length matching for all the traces in the DQS group, you must take
package delays into consideration. Because the package delays of traces A and B are
40 ps longer than the package delays of traces E and F, you would need to make the
board traces for E and F 40 ps longer than the board traces for A and B.
A similar methodology would apply to traces C and D, which should be 20 ps longer
than the lengths of traces A and B.
The following figure shows this scenario with the length of trace A at 450 ps.
Figure 51. Deskew Example with Trace Delay Calculations
(Figure: with trace A set to 450 ps, the board traces are A = 450 ps and B = A = 450 ps for mem_dq[0] and mem_dq[1], C = D = A + 20 ps = 470 ps for mem_dq[2] and mem_dq[3], and E = F = A + 40 ps = 490 ps for mem_dqs and mem_dqs_n.)
When you enter the board skews into the Board Settings tab of the DDR3 parameter
editor, you should calculate the board skew parameters as the sums of board delay
and corresponding package delay. If a pin does not have a package delay (such as
address and command pins), you should use the board delay only.
The example of the preceding figure shows an ideal case where board skews are
perfectly matched. In reality, you should allow plus or minus 10 ps of skew mismatch
within a DQS group (DQ/DQS/DM).
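The worked example above reduces to equalizing the sum of package delay and board delay across the DQS group. A minimal sketch of that arithmetic, using the example's numbers, follows; the pin names are shorthand for the .pin file entries shown earlier.

# The deskew example above as arithmetic: equalize (package delay + board
# delay) across the DQS group, using dq[0] (the longest package delay) as
# the reference with a 450 ps board trace.
package_ps = {"dq0": 120, "dq1": 120, "dq2": 100, "dq3": 100,
              "dqs": 80, "dqs_n": 80}

reference_total = package_ps["dq0"] + 450          # 570 ps total flight time

board_ps = {pin: reference_total - pkg for pin, pkg in package_ps.items()}
print(board_ps)
# {'dq0': 450, 'dq1': 450, 'dq2': 470, 'dq3': 470, 'dqs': 490, 'dqs_n': 490}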
4.2.3.5 Package Migration
Package delays can be different for the same pin in different packages. If you want to
use multiple migratable packages in your system, you should compensate for package
skew as described in this topic. The information in this topic applies to Arria 10,
Stratix V, and Stratix 10 devices.
Assume two migratable packages, device A and device B, and that you want to
compensate for the board trace lengths for device A. Follow these steps:
1. Compile your design for device A, with the Package Skew option enabled.
2. Note the skews in the <core_name>.pin file for device A. Deskew these package skews with board trace lengths as described in the preceding examples.
3. Recompile your design for device A.
4. For device B, open the parameter editor and deselect the Package Deskew option.
5. Calculate board skew parameters taking into account only the board traces for device B, and enter that value into the parameter editor for device B.
6. Regenerate the IP and recompile the design for device B.
7. Verify that timing requirements are met for both device A and device B.
4.3 Document Revision History

May 2017 (2017.5.08):
• Added Stratix 10 to Package Deskew and Package Migration sections.
• Rebranded as Intel.
October 2016 (2016.10.31): Maintenance release.
May 2016 (2016.05.02): Changed recommended value of skew mismatch in Deskew Example topic.
November 2015 (2015.11.02):
• Changed instances of Quartus II to Quartus Prime.
• Added content for LPDDR3.
May 2015 (2015.05.04): Maintenance release.
December 2014 (2014.12.15): Maintenance release.
August 2014 (2014.08.15):
• Removed millimeter approximations from lengths expressed in picoseconds in LPDDR2 Layout Guidelines table.
• Minor formatting fixes in LPDDR2 Layout Guidelines table.
December 2013 (2013.12.16): Consolidated General Layout Guidelines.
November 2012 (1.0): Initial release.
5 RLDRAM II and RLDRAM 3 Board Design Guidelines
The following topics provide layout guidelines for you to improve your system's signal
integrity and to successfully implement an RLDRAM II or RLDRAM 3 interface.
The RLDRAM II Controller with UniPHY intellectual property (IP) enables you to implement common I/O (CIO) RLDRAM II interfaces with Arria® V, Stratix® III, Stratix IV, and Stratix V devices. The RLDRAM 3 UniPHY IP enables you to implement CIO RLDRAM 3 interfaces with Stratix V and Arria V GZ devices. You can implement separate I/O (SIO) RLDRAM II or RLDRAM 3 interfaces with the ALTDQ_DQS or ALTDQ_DQS2 IP cores.
The following topics focus on the following key factors that affect signal integrity:
• I/O standards
• RLDRAM II and RLDRAM 3 configurations
• Signal terminations
• Printed circuit board (PCB) layout guidelines
I/O Standards
RLDRAM II interface signals use one of the following JEDEC I/O signalling standards:
• HSTL-15—provides the advantages of lower power and lower emissions.
• HSTL-18—provides increased noise immunity with slightly greater output voltage swings.
RLDRAM 3 interface signals use the following JEDEC I/O signalling standards:
HSTL 1.2 V and SSTL-12.
To select the most appropriate standard for your interface, refer to the following:
• Device Data Sheet for Arria II Devices chapter in the Arria II Device Handbook
• Device Data Sheet for Arria V Devices chapter in the Arria V Device Handbook
• Stratix III Device Data Sheet: DC and Switching Characteristics chapter in the Stratix III Device Handbook
• DC and Switching Characteristics for Stratix IV Devices chapter in the Stratix IV Device Handbook
• DC and Switching Characteristics for Stratix V Devices chapter in the Stratix V Device Handbook
The RLDRAM II Controller with UniPHY IP defaults to HSTL 1.8 V Class I outputs and
HSTL 1.8 V inputs. The RLDRAM 3 UniPHY IP defaults to HSTL 1.2 V Class I outputs
and HSTL 1.2 V inputs.
Note:
The default for RLDRAM 3 changes from Class I to Class II, supporting up to 933 MHz,
with the release of the Quartus II software version 12.1 SP1.
Related Links
• Device Data Sheet for Arria II Devices
• Device Data Sheet for Arria V Devices
• Stratix III Device Data Sheet: DC and Switching Characteristics
• DC and Switching Characteristics for Stratix IV Devices
• DC and Switching Characteristics for Stratix V Devices
5.1 RLDRAM II Configurations
The RLDRAM II Controller with UniPHY IP supports CIO RLDRAM II interfaces with one
or two devices. With two devices, the interface supports a width expansion
configuration up to 72-bits. The termination and layout principles for SIO RLDRAM II
interfaces are similar to CIO RLDRAM II, except that SIO RLDRAM II interfaces have
unidirectional data buses.
The following figure shows the main signal connections between the FPGA and a single
CIO RLDRAM II component.
Figure 52. Configuration with a Single CIO RLDRAM II Component
(Figure: the FPGA connects to a single CIO RLDRAM II device over DK/DK#, QK/QK#, DQ, DM, CK/CK#, address/bank address, WE, REF, and CS; the RLDRAM II device's ZQ pin connects to an RQ resistor, and the terminations called out in the notes below apply to the clock, data, and command signals.)
Notes to Figure:
1. Use external differential termination on DK/DK# and CK/CK#.
2. Use FPGA parallel on-chip termination (OCT) for terminating QK/QK# and DQ on reads.
3. Use RLDRAM II component on-die termination (ODT) for terminating DQ and DM on writes.
4. Use external discrete termination with fly-by placement to avoid stubs.
5. Use external discrete termination for this signal, as shown for REF.
6. Use external discrete termination, as shown for REF, but you may require a pull-up resistor to VDD as an alternative option. Refer to the RLDRAM II device data sheet for more information about RLDRAM II power-up sequencing.
The following figure shows the main signal connections between the FPGA and two
CIO RLDRAM II components in a width expansion configuration.
Figure 53. Configuration with Two CIO RLDRAM II Components in a Width Expansion Configuration
(Figure: the FPGA connects separate DK/DK#, QK/QK#, DQ, and DM buses to each of two RLDRAM II devices, and distributes the CK/CK#, address/bank address/REF/WE, and CS signals to both devices; each device's ZQ pin connects to its own RQ resistor, with the terminations called out in the notes below.)
Notes to Figure:
1. Use external differential termination on DK/DK#.
2. Use FPGA parallel on-chip termination (OCT) for terminating QK/QK# and DQ on reads.
3. Use RLDRAM II component on-die termination (ODT) for terminating DQ and DM on writes.
4. Use external dual 200 Ω differential termination.
5. Use external discrete termination at the trace split of the balanced T or Y topology.
6. Use external discrete termination at the trace split of the balanced T or Y topology, but you may require a pull-up resistor to VDD as an alternative option. Refer to the RLDRAM II device data sheet for more information about RLDRAM II power-up sequencing.
5.2 RLDRAM 3 Configurations
The RLDRAM 3 UniPHY IP supports interfaces for CIO RLDRAM 3 with one or two
devices. With two devices, the interface supports a width expansion configuration up
to 72-bits. The termination and layout principles for SIO RLDRAM 3 interfaces are
similar to CIO RLDRAM 3, except that SIO RLDRAM 3 interfaces have unidirectional
data buses.
The following figure shows the main signal connections between the FPGA and a single
CIO RLDRAM 3 component.
Figure 54. Configuration with a Single CIO RLDRAM 3 Component
(Figure: the FPGA connects to a single CIO RLDRAM 3 device over DK/DK#, QK/QK#, DQ, DM, CK/CK#, address/bank address, WE, REF, CS, and RESET; the RLDRAM 3 device's ZQ pin connects to an RQ resistor, with the terminations called out in the notes below.)
Notes to Figure:
1. Use external differential termination on CK/CK#.
2. Use FPGA parallel on-chip termination (OCT) for terminating QK/QK# and DQ on reads.
3. Use RLDRAM 3 component on-die termination (ODT) for terminating DQ, DM, and DK/DK# on writes.
4. Use external discrete termination with fly-by placement to avoid stubs.
5. Use external discrete termination for this signal, as shown for REF.
6. Use external discrete termination, as shown for REF, but you may require a pull-up resistor to VDD as an alternative option. Refer to the RLDRAM 3 device data sheet for more information about RLDRAM 3 power-up sequencing.
The following figure shows the main signal connections between the FPGA and two
CIO RLDRAM 3 components in a width expansion configuration.
Figure 55. Configuration with Two CIO RLDRAM 3 Components in a Width Expansion Configuration
(Figure: the FPGA connects separate DK/DK#, QK/QK#, DQ, and DM buses to each of two RLDRAM 3 devices, and distributes the CK/CK#, address/bank address/REF/WE, CS, and RESET signals to both devices; each device's ZQ pin connects to its own RQ resistor, with the terminations called out in the notes below.)
Notes to Figure:
1. Use FPGA parallel OCT for terminating QK/QK# and DQ on reads.
2. Use RLDRAM 3 component ODT for terminating DQ, DM, and DK on writes.
3. Use external dual 200 Ω differential termination.
4. Use external discrete termination at the trace split of the balanced T or Y topology.
5. Use external discrete termination at the trace split of the balanced T or Y topology, but you may require a pull-up resistor to VDD as an alternative option. Refer to the RLDRAM 3 device data sheet for more information about RLDRAM 3 power-up sequencing.
5.3 Signal Terminations
The following table lists the on-chip series termination (RS OCT) and on-chip parallel
termination (RT OCT) schemes for supported devices.
Note:
For RLDRAM 3, the default output termination resistance (RS) changes from 50 ohm to
25 ohm with the release of the Quartus II software version 12.1 SP1.
Table 43. On-Chip Termination Schemes
Termination Scheme         | Class I Signal Standards                               | Arria II GZ, Stratix III, and Stratix IV (Row/Column I/O) | Arria V and Stratix V (Row/Column I/O)
RS OCT without Calibration | RLDRAM II - HSTL-15 and HSTL-18; RLDRAM 3 - HSTL 1.2 V | 50                                                        | 50
RS OCT with Calibration    | RLDRAM II - HSTL-15 and HSTL-18; RLDRAM 3 - HSTL 1.2 V | 50                                                        | 50 (1)
RT OCT with Calibration    | RLDRAM II - HSTL-15 and HSTL-18; RLDRAM 3 - HSTL 1.2 V | 50                                                        | 50 (1)
Note to Table:
1. Although 50-ohm is the recommended option, Stratix V devices offer a wider range of calibrated termination impedances.
RLDRAM II and RLDRAM 3 CIO interfaces have bidirectional data paths. The UniPHY IP uses dynamic OCT on the data path, which switches between series OCT for memory writes and parallel OCT for memory reads. The termination schemes also follow these characteristics:
• Although 50-ohm is the recommended option, Stratix V devices offer a wider range of calibrated termination impedances.
• RS OCT supports output buffers.
• RT OCT supports input buffers.
• RS OCT supports bidirectional buffers only when they are driving output signals.
• RT OCT supports bidirectional buffers only when they are receiving input signals.
For Arria II GZ, Stratix III, and Stratix IV devices, the HSTL Class I I/O calibrated terminations are calibrated against 50-ohm 1% resistors connected to the RUP and RDN pins in an I/O bank with the same VCCIO as the RLDRAM II interface. For Arria V and Stratix V devices, the HSTL Class I I/O calibrated terminations are calibrated against 100-ohm 1% resistors connected to the RZQ pins in an I/O bank with the same VCCIO as the RLDRAM II and RLDRAM 3 interfaces.
The calibration occurs at the end of the device configuration.
RLDRAM II and RLDRAM 3 memory components have a ZQ pin that connects through a resistor RQ to ground. Typically, the RLDRAM II and RLDRAM 3 output signal impedance is a fraction of RQ. Refer to the RLDRAM II and RLDRAM 3 device data sheets for more information.
For information about OCT, refer to the following:
• I/O Features in Arria II Devices chapter in the Arria II Device Handbook
• I/O Features in Arria V Devices chapter in the Arria V Device Handbook
• Stratix III Device I/O Features chapter in the Stratix III Device Handbook
• I/O Features in Stratix IV Devices chapter in the Stratix IV Device Handbook
• I/O Features in Stratix V Devices chapter in the Stratix V Device Handbook
Intel strongly recommends signal terminations to optimize signal integrity and timing
margins, and to minimize unwanted emissions, reflections, and crosstalk.
Note:
Simulate your design to check your termination scheme.
Related Links
• I/O Features in Arria II Devices
• I/O Features in Arria V Devices
• Stratix III Device I/O Features
• I/O Features in Stratix IV Devices
• I/O Features in Stratix V Devices
5.3.1 Input to the FPGA from the RLDRAM Components
The RLDRAM II or RLDRAM 3 component drives the following input signals into the
FPGA:
• Read data (DQ on the bidirectional data signals for CIO RLDRAM II and CIO RLDRAM 3).
• Read clocks (QK/QK#).
Intel recommends that you use the FPGA parallel OCT to terminate the data on reads
and read clocks.
5.3.2 Outputs from the FPGA to the RLDRAM II and RLDRAM 3
Components
The following output signals are from the FPGA to the RLDRAM II and RLDRAM 3
components:
• Write data (DQ on the bidirectional data signals for CIO RLDRAM II and RLDRAM 3)
• Data mask (DM)
• Address, bank address
• Command (CS, WE, and REF)
• Clocks (CK/CK# and DK/DK#)
For point-to-point single-ended signals requiring external termination, Intel
recommends that you place a fly-by termination by terminating at the end of the
transmission line after the receiver to avoid unterminated stubs. The guideline is to
place the fly-by termination within 100 ps propagation delay of the receiver.
Although not recommended, you can place the termination before the receiver, which
leaves an unterminated stub. The stub delay is critical because the stub between the
termination and the receiver is effectively unterminated, causing additional ringing
and reflections. Stub delays should be less than 50 ps.
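A quick way to sanity-check the 100 ps fly-by and 50 ps stub budgets above is to convert candidate distances to delay. The sketch below assumes roughly 180 ps/inch on FR4; substitute the value from your own stackup.

# Quick check of the fly-by and stub guidelines above: termination within
# 100 ps of the receiver, stubs under 50 ps. The ~180 ps/inch constant is an
# assumption for standard FR4; use the value from your stackup simulation.
PS_PER_INCH = 180.0

def within_flyby_budget(distance_in: float) -> bool:
    return distance_in * PS_PER_INCH <= 100.0

def stub_ok(stub_in: float) -> bool:
    return stub_in * PS_PER_INCH <= 50.0

print(within_flyby_budget(0.5))   # True: 0.5 in is ~90 ps
print(stub_ok(0.4))               # False: a 0.4 in (~72 ps) stub is too long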
Intel recommends that the differential clocks, CK, CK# and DK, DK# (RLDRAM II) and
CK, CK# (RLDRAM 3), use a differential termination at the end of the trace at the
external memory component. Alternatively, you can terminate each clock output with
a parallel termination to VTT.
5.3.3 RLDRAM II Termination Schemes
The following table lists the recommended termination schemes for major CIO
RLDRAM II memory interface signals. These signals include data (DQ), data mask
(DM), clocks (CK, CK#, DK, DK#, QK, and QK#), address, bank address, and command
(WE#, REF#, and CS#).
Table 44. RLDRAM II Termination Recommendations for Arria II GZ, Arria V, Stratix III, Stratix IV, and Stratix V Devices
Signal Type                  | HSTL 15/18 Standard (1) (2) (3) | Memory End Termination (4)
DK/DK# Clocks                | Class I R50 NO CAL              | 100-ohm Differential
QK/QK# Clocks                | Class I P50 CAL                 | ZQ50
Data (Write)                 | Class I R50 CAL                 | ODT
Data (Read)                  | Class I P50 CAL                 | ZQ50
Data Mask                    | Class I R50 CAL                 | ODT
CK/CK# Clocks                | Class I R50 NO CAL              | ×1 = 100-ohm Differential (9); ×2 = 200-ohm Differential (10)
Address/Bank Address (5) (6) | Class I Max Current             | 50-ohm Parallel to VTT
Command (WE#, REF#) (5) (6)  | Class I Max Current             | 50-ohm Parallel to VTT
Command (CS#) (5) (6) (7)    | Class I Max Current             | 50-ohm Parallel to VTT or Pull-up to VDD
QVLD (8)                     | Class I P50 CAL                 | ZQ50
Notes to Table:
1. R is effective series output impedance.
2. P is effective parallel input impedance.
3. CAL is OCT with calibration.
4. NO CAL is OCT without calibration.
5. For width expansion configurations, the address and control signals are routed to two devices. The recommended termination is 50-ohm parallel to VTT at the trace split of a balanced T or Y routing topology. Use a clamshell placement of the two RLDRAM II components to achieve minimal stub delays and optimum signal integrity. Clamshell placement is when two devices overlay each other by being placed on opposite sides of the PCB.
6. The UniPHY default IP setting for this output is Max Current. A Class I 50-ohm output with calibration is typically optimal in single-load topologies.
7. Intel recommends that you use a 50-ohm parallel termination to VTT if your design meets the power sequencing requirements of the RLDRAM II component. Refer to the RLDRAM II data sheet for further information.
8. QVLD is not used in the RLDRAM II Controller with UniPHY implementations.
9. ×1 is a single-device load.
10. ×2 is a double-device load. An alternative option is to use a 100-ohm differential termination at the trace split.
Note:
Intel recommends that you simulate your specific design for your system to ensure
good signal integrity.
5.3.4 RLDRAM 3 Termination Schemes
The following table lists the recommended termination schemes for major CIO
RLDRAM 3 memory interface signals. These signals include data (DQ), data mask (DM),
clocks (CK, CK#, DK, DK#, QK, and QK#), address, bank address, and command (WE#,
REF#, and CS#).
Table 45. RLDRAM 3 Termination Recommendations for Arria V GZ and Stratix V Devices
Signal Type                                                | Memory End Termination Option in the Chip (ODT) | Recommended On-Board Terminations
Data Read (DQ, QK)                                         | 40, 60 (series)                                 | None
Data Write (DQ, DM, DK)                                    | 40, 60, 120 (parallel)                          | None
Address/Bank Address/Command (WE#, REF#, CS#) (1) (2) (3)  | None                                            | 50-ohm Parallel to VTT
CK/CK#                                                     | None                                            | 100-ohm Differential
Notes to Table:
1. For width expansion configurations, the address and control signals are routed to two devices. The recommended termination is 50-ohm parallel to VTT at the trace split of a balanced T or Y routing topology. Use a clamshell placement of the two RLDRAM 3 components to achieve minimal stub delays and optimum signal integrity. Clamshell placement is when two devices overlay each other by being placed on opposite sides of the PCB.
2. The UniPHY default IP setting for this output is Max Current. A Class I 50-ohm output with calibration is typically optimal in single-load topologies.
3. Intel recommends that you use a 50-ohm parallel termination to VTT if your design meets the power sequencing requirements of the RLDRAM 3 component. Refer to the RLDRAM 3 data sheet for further information.
4. QVLD is not used in the RLDRAM 3 Controller with UniPHY implementations.
5. For information on the I/O standards and on-chip termination (OCT) resistance values supported for RLDRAM 3, refer to the I/O Features chapter of the appropriate device handbook.
Intel recommends that you simulate your specific design for your system to ensure
good signal integrity.
5.4 PCB Layout Guidelines
Intel recommends that you create your project in the Quartus® Prime software with a fully implemented RLDRAM II Controller with UniPHY interface, or RLDRAM 3 with UniPHY IP, and observe the interface timing margins to determine the actual margins for your design.
Although the recommendations in this chapter are based on simulations, you can
apply the same general principles when determining the best termination scheme,
drive strength setting, and loading style to any board designs. Intel recommends that
you perform simulations, either using IBIS or HSPICE models, to determine the quality
of signal integrity on your designs, and that you get accurate time base skew numbers
when you simulate your specific implementation.
Note:
1. The following layout guidelines include several +/- length-based rules. These
length-based guidelines are for first order timing approximations if you cannot
simulate the actual delay characteristics of your PCB implementation. They do not
include any margin for crosstalk.
2. To reliably close timing to and from the periphery of the device, signals to and
from the periphery should be registered before any further logic is connected.
Related Links
Intel Power Distribution Network (PDN) Design Tool
5.5 General Layout Guidelines
The following table lists general board design layout guidelines. These guidelines are
Intel recommendations, and should not be considered as hard requirements. You
should perform signal integrity simulation on all the traces to verify the signal integrity
of the interface. You should extract the slew rate and propagation delay information,
enter it into the IP and compile the design to ensure that timing requirements are
met.
Table 46. General Layout Guidelines

Impedance:
• All unused via pads must be removed, because they cause unwanted capacitance.
• Trace impedance plays an important role in the signal integrity. You must perform board-level simulation to determine the best characteristic impedance for your PCB. For example, it is possible that for multi-rank systems 40 ohms could yield better results than a traditional 50-ohm characteristic impedance.

Decoupling Parameter:
• Use 0.1 uF in 0402 size to minimize inductance.
• Place VTT voltage decoupling close to the termination resistors.
• Connect decoupling caps between VTT and ground.
• Use a 0.1 uF cap for every other VTT pin and a 0.01 uF cap for every VDD and VDDQ pin.
• Verify the capacitive decoupling using the Intel Power Distribution Network Design Tool.

Power:
• Route GND and VCC as planes.
• Route VCCIO for memories in a single split plane with at least a 20-mil (0.020 inches, or 0.508 mm) gap of separation.
• Route VTT as islands or 250-mil (6.35-mm) power traces.
• Route oscillators and PLL power as islands or 100-mil (2.54-mm) power traces.

General Routing:
All specified delay matching requirements include PCB trace delays, different layer propagation velocity variance, and crosstalk. To minimize PCB layer propagation variance, Intel recommends that signals from the same net group always be routed on the same layer.
• Use 45° angles (not 90° corners).
• Avoid T-junctions for critical nets or clocks.
• Avoid T-junctions greater than 250 mils (6.35 mm).
• Disallow signals across split planes.
• Restrict routing other signals close to system reset signals.
• Avoid routing memory signals closer than 0.025 inch (0.635 mm) to PCI or system clocks.
Related Links
Power Distribution Network Design Tool
5.6 RLDRAM II and RLDRAM 3 Layout Guidelines
The following table lists the RLDRAM II and RLDRAM 3 general routing layout
guidelines. These guidelines apply to Arria V, Arria 10, Stratix V, and Stratix 10
devices.
Table 47. RLDRAM II and RLDRAM 3 Layout Guidelines

General Routing:
• If you must route signals of the same net group on different layers with the same impedance characteristic, simulate your worst-case PCB trace tolerances to ascertain actual propagation delay differences. Typical layer-to-layer trace delay variations are on the order of 15 ps/inch.
• Avoid T-junctions greater than 150 ps.
• Match all signals within a given DQ group with a maximum skew of ±10 ps and route on the same layer.

Clock Routing:
• Route clocks on inner layers with outer-layer run lengths held to under 150 ps.
• These signals should maintain a 10-mil (0.254 mm) spacing from other nets.
• Clocks should maintain a length-matching between clock pairs of ±5 ps.
• Differential clocks should maintain a length-matching between P and N signals of ±2 ps.
• Space between different clock pairs should be at least three times the space between the traces of a differential pair.

Address and Command Routing:
• To minimize crosstalk, route address, bank address, and command signals on a different layer than the data and data mask signals.
• Do not route the differential clock signals close to the address signals.
• Keep the distance from the pin on the RLDRAM II or RLDRAM 3 component to the stub termination resistor (VTT) to less than 50 ps for the address/command signal group.
• Keep the distance from the pin on the RLDRAM II or RLDRAM 3 component to the fly-by termination resistor (VTT) to less than 100 ps for the address/command signal group.

External Memory Routing Rules:
• Apply the following parallelism rules for the RLDRAM II or RLDRAM 3 data/address/command groups:
  — 4 mils for parallel runs < 0.1 inch (approximately 1× spacing relative to plane distance).
  — 5 mils for parallel runs < 0.5 inch (approximately 1× spacing relative to plane distance).
  — 10 mils for parallel runs between 0.5 and 1.0 inches (approximately 2× spacing relative to plane distance).
  — 15 mils for parallel runs between 1.0 and 3.3 inches (approximately 3× spacing relative to plane distance).

Maximum Trace Length:
• Keep the maximum trace length of all signals from the FPGA to the RLDRAM II or RLDRAM 3 components to 600 ps.

Trace Matching Guidance:
The following layout approach is recommended, based on the preceding guidelines:
1. If the RLDRAM II interface has multiple DQ groups (×18 or ×36 RLDRAM II/RLDRAM 3 component or width expansion configuration), match all the DK/DK# and QK/QK# clocks as tightly as possible to optimize the timing margins in your design.
2. Route the DK/DK# write clock and QK/QK# read clock associated with a DQ group on the same PCB layer. Match these clock pairs to within ±5 ps.
3. Set the DK/DK# or QK/QK# clock as the target trace propagation delay for the associated data and data mask signals.
4. Route the data and data mask signals for the DQ group ideally on the same layer as the associated QK/QK# and DK/DK# clocks, to within ±10 ps skew of the target clock.
5. Route the CK/CK# clocks and set them as the target trace propagation delays for the address/command signal group. Match the CK/CK# clock to within ±50 ps of all the DK/DK# clocks.
6. Route the address/control signal group (address, bank address, CS, WE, and REF) ideally on the same layer as the CK/CK# clocks, to within ±20 ps skew of the CK/CK# traces.
Note: It is important to match the delays of CK vs. DK, and CK vs. address/command, as much as possible.
This layout approach provides a good starting point for a design requirement of the highest clock frequency supported for the RLDRAM II and RLDRAM 3 interfaces.
5.7 Layout Approach
For all practical purposes, you can regard the TimeQuest timing analyzer's report on
your memory interface as definitive for a given set of memory and board timing
parameters.
You will find timing under Report DDR in TimeQuest and on the Timing Analysis tab
in the parameter editor.
The following flowchart illustrates the recommended process to follow during the
board design phase, to determine timing margin and make iterative improvements to
your design.
(Flowchart: Primary Layout → Calculate Setup and Hold Derating, Calculate Channel Signal Integrity, Calculate Board Skews, and Find Memory Timing Parameters → Generate an IP core that accurately represents your memory subsystem, including pin-out and accurate parameters in the parameter editor's Board Settings tab → Run Quartus Prime compilation with the generated IP core → If there are non-core timing violations in the Report DDR panel, adjust the layout to improve trace length mismatch, signal reflections (ISI), crosstalk, or memory speed grade, and iterate; otherwise, done.)
Board Skew
For information on calculating board skew parameters, refer to Implementing and Parameterizing Memory IP, in the External Memory Interface Handbook.
The Board Skew Parameter Tool is an interactive tool that can help you calculate board
skew parameters if you know the absolute delay values for all the memory related
traces.
Memory Timing Parameters
For information on the memory timing parameters to be entered into the parameter
editor, refer to the datasheet for your external memory device.
Related Links
Board Skew Parameter Tool
5.7.1 Arria V and Stratix V Board Setting Parameters
The following guidelines apply to the Board Setting parameters for Arria V and Stratix
V devices.
Setup and Hold Derating
For information on calculating derating parameters, refer to Implementing and
Parameterizing Memory IP, in the External Memory Interface Handbook.
Channel Signal Integrity
For information on determining channel signal integrity for Stratix V and earlier products, refer to the wiki page: http://www.alterawiki.com/wiki/Measuring_Channel_Signal_Integrity.
Board Skew
For information on calculating board skew parameters, refer to Implementing and
Parameterizing Memory IP, in the External Memory Interface Handbook.
The Board Skew Parameter Tool is an interactive tool that can help you calculate board
skew parameters if you know the absolute delay values for all the memory related
traces.
Memory Timing Parameters
For information on the memory timing parameters to be entered into the parameter
editor, refer to the datasheet for your external memory device.
5.7.2 Arria 10 Board Setting Parameters
For Board Setting and layout approach information for Arria 10 devices, refer to the wiki page: http://www.alterawiki.com/wiki/Arria_10_EMIF_Simulation_Guidance.
5.8 Package Deskew for RLDRAM II and RLDRAM 3
You should follow Intel's package deskew guidance if you are using Arria 10, Stratix
10, or Stratix V devices.
For more information on package deskew, refer to Package Deskew.
Related Links
Package Deskew
5.9 Document Revision History

May 2017 (2017.5.08):
• Added Stratix 10 to RLDRAM II and RLDRAM 3 Layout Guidelines section.
• Rebranded as Intel.
October 2016 (2016.10.31): Maintenance release.
May 2016 (2016.05.02): Maintenance release.
November 2015 (2015.11.02): Changed instances of Quartus II to Quartus Prime.
May 2015 (2015.05.04): Maintenance release.
December 2014 (2014.12.15): Maintenance release.
August 2014 (2014.08.15):
• Revised RLDRAM 3 Termination Recommendations for Arria V GZ and Stratix V Devices table.
• Removed millimeter approximations from lengths expressed in picoseconds in RLDRAM II and RLDRAM 3 Layout Guidelines table.
• Minor formatting fixes in RLDRAM II and RLDRAM 3 Layout Guidelines table.
• Added Layout Approach section.
December 2013 (2013.12.16):
• Added note about byteenable support to Signal Descriptions section.
• Consolidated General Layout Guidelines.
November 2012 (3.2): Added content supporting RLDRAM 3 and updated RLDRAM II standards.
June 2012 (3.1): Added Feedback icon.
November 2011 (3.0): Added Arria V information.
June 2011 (2.0): Added Stratix V information.
December 2010 (1.0): Initial release.
6 QDR II and QDR-IV SRAM Board Design Guidelines
The following topics provide guidelines for you to improve your system's signal
integrity and layout guidelines to help successfully implement a QDR II or QDR II+
SRAM interface in your system.
The QDR II and QDR II+ SRAM Controller with UniPHY intellectual property (IP) enables you to implement QDR II and QDR II+ interfaces with Arria® II GX, Arria V, Stratix® III, Stratix IV, and Stratix V devices.
Note:
In the following topics, QDR II SRAM refers to both QDR II and QDR II+ SRAM unless
stated otherwise.
The following topics focus on the following key factors that affect signal integrity:
• I/O standards
• QDR II SRAM configurations
• Signal terminations
• Printed circuit board (PCB) layout guidelines
I/O Standards
QDR II SRAM interface signals use one of the following JEDEC I/O signalling
standards:
• HSTL-15—provides the advantages of lower power and lower emissions.
• HSTL-18—provides increased noise immunity with slightly greater output voltage swings.
To select the most appropriate standard for your interface, refer to the Arria II GX
Devices Data Sheet: Electrical Characteristics chapter in the Arria II Device Handbook,
Stratix III Device Datasheet: DC and Switching Characteristics chapter in the Stratix
III Device Handbook, or the Stratix IV Device Datasheet DC and Switching
Characteristics chapter in the Stratix IV Device Handbook.
The QDR II SRAM Controller with UniPHY IP defaults to HSTL 1.5 V Class I outputs and HSTL 1.5 V inputs.
Related Links
• Arria II GX Devices Data Sheet: Electrical Characteristics
• Stratix III Device Datasheet: DC and Switching Characteristics
• Stratix IV Device Datasheet DC and Switching Characteristics
6.1 QDR II SRAM Configurations
The QDR II SRAM Controller with UniPHY IP supports interfaces with a single device,
and two devices in a width expansion configuration up to maximum width of 72 bits.
The following figure shows the main signal connections between the FPGA and a single
QDR II SRAM component.
Figure 56. Configuration with a Single QDR II SRAM Component
(Figure: the FPGA connects to a single QDR II SRAM device over read data (Q), echo clocks (CQ/CQ#), write data (D), byte write select (BWS#), write clocks (K/K#), address (A), control (WPS#, RPS#), and DOFF#; the SRAM's ZQ pin connects to an RQ resistor, and several of the FPGA-to-memory signals are shown with parallel terminations to VTT.)
The following figure shows the main signal connections between the FPGA and two
QDR II SRAM components in a width expansion configuration.
Figure 57. Configuration with Two QDR II SRAM Components in a Width Expansion Configuration
(Figure: the FPGA connects to two QDR II SRAM devices; each device has its own echo clocks (CQ/CQ#) and write clocks (K0/K0#, K1/K1#), while the address, WPS#, RPS#, BWS#, and DOFF# signals are distributed to both devices, with parallel terminations to VTT as shown; each device's ZQ pin connects to its own RQ resistor.)
The following figure shows the detailed balanced topology recommended for the
address and command signals in the width expansion configuration.
Figure 58. External Parallel Termination for Balanced Topology
(Figure: the FPGA drives a single trace TL1 that splits into two TL2 branches, one to each QDR II memory device, with a parallel termination to VTT placed at the trace split.)
6.2 Signal Terminations
Arria II GX, Stratix III and Stratix IV devices offer on-chip termination (OCT)
technology.
The following table summarizes the extent of OCT support for each device.
Table 48. On-Chip Termination Schemes
Termination Scheme (HSTL-15 and HSTL-18, Class I) (1) | Arria II GX (Column I/O / Row I/O) | Arria II GZ, Stratix III, and Stratix IV (Column I/O / Row I/O) | Arria V and Stratix V (Column I/O / Row I/O)
On-Chip Series Termination without Calibration        | 50 / 50                            | 50 / 50                                                         | — / —
On-Chip Series Termination with Calibration           | 50 / 50                            | 50 / 50                                                         | — / —
On-Chip Parallel Termination with Calibration         | — / —                              | 50 / 50                                                         | 50 / 50
Note to Table:
1. This table provides information about the HSTL-15 and HSTL-18 standards because these are the I/O standards supported for QDR II SRAM memory interfaces by Intel FPGAs.
On-chip series (RS) termination is supported only on output and bidirectional buffers,
while on-chip parallel (RT) termination is supported only on input and bidirectional
buffers. Because QDR II SRAM interfaces have unidirectional data paths, dynamic OCT
is not required.
For Arria II GX, Stratix III and Stratix IV devices, the HSTL Class I I/O calibrated
terminations are calibrated against 50-ohm 1% resistors connected to the RUP and RDN
pins in an I/O bank with the same VCCIO as the QDR II SRAM interface. The calibration
occurs at the end of the device configuration.
QDR II SRAM components have a ZQ pin, which is connected through a resistor RQ to ground. Typically, the QDR II SRAM output signal impedance is 0.2 × RQ. Refer to the QDR II SRAM device data sheet for more information.
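As a small illustration of the relationship stated above, a 250-ohm RQ resistor programs roughly 50-ohm outputs. The 250-ohm value is an example, not a recommendation; check your SRAM data sheet for the exact ratio and supported RQ range.

# Illustration of the ZQ/RQ relationship stated above for QDR II SRAM:
# output impedance ~= 0.2 x RQ. The 250-ohm example value gives ~50 ohms.
def qdr2_output_impedance(rq_ohms: float) -> float:
    return 0.2 * rq_ohms

print(qdr2_output_impedance(250))   # 50.0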
For information about OCT, refer to the I/O Features in Arria II GX Devices chapter in the Arria II GX Device Handbook, the I/O Features in Arria V Devices chapter in the Arria V Device Handbook, the Stratix III Device I/O Features chapter in the Stratix III Device Handbook, the I/O Features in Stratix IV Devices chapter in the Stratix IV Device Handbook, and the I/O Features in Stratix V Devices chapter in the Stratix V Device Handbook.
Related Links
• I/O Features in Arria II GX Devices
• I/O Features in Arria V Devices
• Stratix III Device I/O Features
• I/O Features in Stratix IV Devices
• I/O Features in Stratix V Devices
6.2.1 Output from the FPGA to the QDR II SRAM Component
The following output signals are from the FPGA to the QDR II SRAM component:
• write data
• byte write select (BWSn)
• address
• control (WPSn and RPSn)
• clocks, K/K#
Intel recommends that you terminate the write clocks, K and K#, with a single-ended fly-by 50-ohm parallel termination to VTT. However, simulations show that you can consider a differential termination if the clock pair is well matched and routed differentially.
Intel strongly recommends signal terminations to optimize signal integrity and timing
margins, and to minimize unwanted emissions, reflections, and crosstalk.
For point-to-point signals, Intel recommends that you place a fly-by termination by
terminating at the end of the transmission line after the receiver to avoid
unterminated stubs. The guideline is to place the fly-by termination within 100 ps
propagation delay of the receiver.
Although not recommended, you can place the termination before the receiver, which
leaves an unterminated stub. The stub delay is critical because the stub between the
termination and the receiver is effectively unterminated, causing additional ringing
and reflections. Stub delays should be less than 50 ps.
Note: Simulate your design to ensure correct functionality.
6.2.2 Input to the FPGA from the QDR II SRAM Component
The QDR II SRAM component drives the following input signals into the FPGA:
• read data
• echo clocks, CQ/CQ#
For point-to-point signals, Intel recommends that you use the FPGA parallel OCT
wherever possible. For devices that do not support parallel OCT (Arria II GX), and for
×36 emulated configuration CQ/CQ# termination, Intel recommends that you use a
fly-by 50-ohm parallel termination to VTT. Although not recommended, you can use parallel termination with a short stub of less than 50 ps propagation delay as an alternative option. The input echo clocks, CQ and CQ#, must not use a differential termination.
6.2.3 Termination Schemes
The following tables list the recommended termination schemes for major QDR II
SRAM memory interface signals.
These signals include write data (D), byte write select (BWS), read data (Q), clocks (K,
K#, CQ, and CQ#), address and command (WPS and RPS).
External Memory Interface Handbook Volume 2: Design Guidelines
176
6 QDR II and QDR-IV SRAM Board Design Guidelines
Table 49. Termination Recommendations for Arria II GX Devices
Signal Type | HSTL 15/18 Standard | FPGA End Discrete Termination (1) (2) | Memory End Termination
K/K# Clocks | Class I R50 CAL | — | 50-ohm parallel to VTT
Write Data | Class I R50 CAL | — | 50-ohm parallel to VTT
BWS | Class I R50 CAL | — | 50-ohm parallel to VTT
Address (3) (4) | Class I Max Current | — | 50-ohm parallel to VTT
WPS, RPS (3) (4) | Class I Max Current | — | 50-ohm parallel to VTT
CQ/CQ# | Class I | 50-ohm parallel to VTT | ZQ50
CQ/CQ# ×36 emulated (5) | Class I | 50-ohm parallel to VTT | ZQ50
Read Data (Q) | Class I | 50-ohm parallel to VTT | ZQ50
QVLD (6) | — | — | ZQ50
Notes to Table:
1. R is effective series output impedance.
2. CAL is calibrated OCT.
3. For width expansion configurations, the address and control signals are routed to two devices. The recommended termination is 50-ohm parallel to VTT at the trace split of a balanced T or Y routing topology. For 400 MHz burst-length-2 configurations, where the address signals are double data rate, Intel recommends a clamshell placement of the two QDR II SRAM components to achieve minimal stub delays and optimum signal integrity. Clamshell placement is when two devices overlay each other by being placed on opposite sides of the PCB.
4. A Class I 50-ohm output with calibration is typically optimal in double-load topologies.
5. For ×36 emulated mode, the recommended termination for the CQ/CQ# signals is a 50-ohm parallel termination to VTT at the trace split. Intel recommends that you use this termination when ×36 DQ/DQS groups are not supported in the FPGA.
6. QVLD is not used in the QDR II or QDR II+ SRAM with UniPHY implementations.
Table 50. Termination Recommendations for Arria V, Stratix III, Stratix IV, and Stratix V Devices
Signal Type | HSTL 15/18 Standard | FPGA End Discrete Termination (1) (2) (3) | Memory End Termination
K/K# Clocks | DIFF Class I R50 NO CAL | — | Series 50-ohm Without Calibration
Write Data | Class I R50 CAL | — | 50-ohm parallel to VTT
BWS | Class I R50 CAL | — | 50-ohm parallel to VTT
Address (4) (5) | Class I Max Current | — | 50-ohm parallel to VTT
WPS, RPS (4) (5) | Class I Max Current | — | 50-ohm parallel to VTT
CQ/CQ# | Class I P50 CAL | — | ZQ50
CQ/CQ# ×36 emulated (6) | — | 50-ohm parallel to VTT | ZQ50
Read Data (Q) | Class I P50 CAL | — | ZQ50
QVLD (7) | Class I P50 CAL | — | ZQ50
Notes to Table:
1. R is effective series output impedance.
2. P is effective parallel input impedance.
3. CAL is calibrated OCT.
4. For width expansion configurations, the address and control signals are routed to two devices. The recommended termination is 50-ohm parallel to VTT at the trace split of a balanced T or Y routing topology. For 400 MHz burst-length-2 configurations, where the address signals are double data rate, Intel recommends a clamshell placement of the two QDR II SRAM components to achieve minimal stub delays and optimum signal integrity. Clamshell placement is when two devices overlay each other by being placed on opposite sides of the PCB.
5. The UniPHY default IP setting for this output is Max Current. A Class I 50-ohm output with calibration is typically optimal in single-load topologies.
6. For ×36 emulated mode, the recommended termination for the CQ/CQ# signals is a 50-ohm parallel termination to VTT at the trace split. Intel recommends that you use this termination when ×36 DQ/DQS groups are not supported in the FPGA.
7. QVLD is not used in the QDR II or QDR II+ SRAM Controller with UniPHY implementations.
Note:
Intel recommends that you simulate your specific design for your system to ensure
good signal integrity.
For a ×36 QDR II SRAM interface that uses an emulated mode of two ×18 DQS groups
in the FPGA, there are two CQ/CQ# connections at the FPGA and a single CQ/CQ#
output from the QDR II SRAM device. Intel recommends that you use a balanced T
topology with the trace split close to the FPGA and a parallel termination at the split,
as shown in the following figure.
Figure 59. Emulated ×36 Mode CQ/CQn Termination Topology
(The figure shows the QDR II memory CQ output driving a trace TL1 to a split close to the FPGA, with a VTT parallel termination at the split and two matched TL2 segments running from the split to the two FPGA CQ inputs; the CQn signal uses the same topology.)
For more information about ×36 emulated modes, refer to the "Pin-out Rule Exceptions for ×36 Emulated QDR II and QDR II+ SRAM Interfaces in Arria II, Stratix III and Stratix IV Devices" section in the Planning Pin and FPGA Resources chapter.
Related Links
Planning Pin and FPGA Resources on page 9
This information is for board designers who must determine FPGA pin usage, to
create board layouts. The board design process sometimes occurs concurrently
with the RTL design process.
6.3 General Layout Guidelines
The following table lists general board design layout guidelines. These guidelines are
Intel recommendations, and should not be considered as hard requirements. You
should perform signal integrity simulation on all the traces to verify the signal integrity
of the interface. You should extract the slew rate and propagation delay information,
enter it into the IP and compile the design to ensure that timing requirements are
met.
Table 51. General Layout Guidelines
Parameter | Guidelines
Impedance
• All unused via pads must be removed, because they cause unwanted capacitance.
• Trace impedance plays an important role in the signal integrity. You must perform board-level simulation to determine the best characteristic impedance for your PCB. For example, it is possible that for multi-rank systems 40 ohms could yield better results than a traditional 50-ohm characteristic impedance.
Decoupling Parameter
• Use 0.1 uF in 0402 size to minimize inductance.
• Make VTT voltage decoupling close to the termination resistors.
• Connect decoupling caps between VTT and ground.
• Use a 0.1 uF cap for every other VTT pin and a 0.01 uF cap for every VDD and VDDQ pin.
• Verify the capacitive decoupling using the Intel Power Distribution Network Design Tool.
Power
• Route GND and VCC as planes.
• Route VCCIO for memories in a single split plane with at least a 20-mil (0.020 inches, or 0.508 mm) gap of separation.
• Route VTT as islands or 250-mil (6.35-mm) power traces.
• Route oscillators and PLL power as islands or 100-mil (2.54-mm) power traces.
General Routing
• All specified delay matching requirements include PCB trace delays, different layer propagation velocity variance, and crosstalk. To minimize PCB layer propagation variance, Intel recommends that signals from the same net group always be routed on the same layer.
• Use 45° angles (not 90° corners).
• Avoid T-junctions for critical nets or clocks.
• Avoid T-junctions greater than 250 mils (6.35 mm).
• Disallow signals across split planes.
• Restrict routing other signals close to system reset signals.
• Avoid routing memory signals closer than 0.025 inch (0.635 mm) to PCI or system clocks.
Related Links
Power Distribution Network Design Tool
6.4 QDR II Layout Guidelines
The following table summarizes QDR II and QDR II+ SRAM general routing layout guidelines.
Note:
1. The following layout guidelines include several +/- length-based rules. These length-based guidelines are for first-order timing approximations if you cannot simulate the actual delay characteristics of your PCB implementation. They do not include any margin for crosstalk.
2. Intel recommends that you get accurate time base skew numbers when you simulate your specific implementation.
3. To reliably close timing to and from the periphery of the device, signals to and from the periphery should be registered before any further logic is connected.
Table 52. QDR II and QDR II+ SRAM Layout Guidelines
Parameter | Guidelines
General Routing
• If signals of the same net group must be routed on different layers with the same impedance characteristic, you must simulate your worst-case PCB trace tolerances to ascertain actual propagation delay differences. Typical layer-to-layer trace delay variations are on the order of 15 ps/inch.
• Avoid T-junctions greater than 150 ps.
Clock Routing
• Route clocks on inner layers with outer-layer run lengths held to under 150 ps.
• These signals should maintain a 10-mil (0.254 mm) spacing from other nets.
• Clocks should maintain a length-matching between clock pairs of ±5 ps.
• Complementary clocks should maintain a length-matching between P and N signals of ±2 ps.
• Keep the distance from the pin on the QDR II SRAM component to the stub termination resistor (VTT) to less than 50 ps for the K, K# clocks.
• Keep the distance from the pin on the QDR II SRAM component to the fly-by termination resistor (VTT) to less than 100 ps for the K, K# clocks.
• Keep the distance from the pin on the FPGA component to the stub termination resistor (VTT) to less than 50 ps for the echo clocks, CQ, CQ#, if they require an external discrete termination.
• Keep the distance from the pin on the FPGA component to the fly-by termination resistor (VTT) to less than 100 ps for the echo clocks, CQ, CQ#, if they require an external discrete termination.
External Memory Routing Rules
• Keep the distance from the pin on the QDR II SRAM component to the stub termination resistor (VTT) to less than 50 ps for the write data, byte write select, and address/command signal groups.
• Keep the distance from the pin on the QDR II SRAM component to the fly-by termination resistor (VTT) to less than 100 ps for the write data, byte write select, and address/command signal groups.
• Keep the distance from the pin on the FPGA (Arria II GX) to the stub termination resistor (VTT) to less than 50 ps for the read data signal group.
• Keep the distance from the pin on the FPGA (Arria II GX) to the fly-by termination resistor (VTT) to less than 100 ps for the read data signal group.
• Parallelism rules for the QDR II SRAM data/address/command groups are as follows:
  — 4 mils for parallel runs < 0.1 inch (approximately 1× spacing relative to plane distance).
  — 5 mils for parallel runs < 0.5 inch (approximately 1× spacing relative to plane distance).
  — 10 mils for parallel runs between 0.5 and 1.0 inches (approximately 2× spacing relative to plane distance).
  — 15 mils for parallel runs between 1.0 and 6.0 inches (approximately 3× spacing relative to plane distance).
Maximum Trace Length
• Keep the maximum trace length of all signals from the FPGA to the QDR II SRAM components to 6 inches.
Related Links
Intel Power Distribution Network (PDN) Design tool
6.5 QDR II SRAM Layout Approach
Using the layout guidelines in the above table, Intel recommends the following layout
approach:
1. Route the K/K# clocks and set the clocks as the target trace propagation delays for the output signal group.
2. Route the write data output signal group (write data, byte write select), ideally on the same layer as the K/K# clocks, to within ±10 ps skew of the K/K# traces.
3. Route the address/control output signal group (address, RPS, WPS), ideally on the same layer as the K/K# clocks, to within ±20 ps skew of the K/K# traces.
4. Route the CQ/CQ# clocks and set the clocks as the target trace propagation delays for the input signal group.
5. Route the read data output signal group (read data), ideally on the same layer as the CQ/CQ# clocks, to within ±10 ps skew of the CQ/CQ# traces.
6. The output and input groups do not need to have the same propagation delays, but they must have all the signals matched closely within the respective groups.
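As a rough sanity check when translating these skew targets into matched lengths, assume a stripline propagation delay of roughly 170 ps per inch (the actual value depends on your stackup and dielectric): a ±10 ps skew budget then corresponds to roughly ±60 mils of matched trace length, and a ±20 ps budget to roughly ±120 mils. Use the delay numbers extracted from your own PCB simulation rather than these approximations whenever possible.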
The following tables list the typical margins for QDR II and QDR II+ SRAM interfaces,
with the assumption that there is zero skew between the signal groups.
Table 53. Typical Worst Case Margins for QDR II SRAM Interfaces of Burst Length 2
Device | Speed Grade | Frequency (MHz) | Typical Margin Address/Command (ps) | Typical Margin Write Data (ps) | Typical Margin Read Data (ps)
Arria II GX | I5 | 250 | ±240 | ±80 | ±170
Arria II GX ×36 emulated | I5 | 200 | ±480 | ±340 | ±460
Stratix IV | — | 350 | — | — | —
Stratix IV ×36 emulated | C2 | 300 | ±320 | ±170 | ±340
Table 54. Typical Worst Case Margins for QDR II+ SRAM Interfaces of Burst Length 4
Device | Speed Grade | Frequency (MHz) | Typical Margin Address/Command (ps) (1) | Typical Margin Write Data (ps) | Typical Margin Read Data (ps)
Arria II GX | I5 | 250 | ±810 | ±150 | ±130
Arria II GX ×36 emulated | I5 | 200 | ±1260 | ±410 | ±420
Stratix IV | C2 | 400 | ±550 | ±10 | ±80
Stratix IV ×36 emulated | C2 | 300 | ±860 | ±180 | ±300
Note to Table:
1. QDR II+ SRAM burst-length-of-4 designs have greater margins on the address signals because the address signals are single data rate.
Other devices and speed grades typically show higher margins than the ones in the
above tables.
Note:
Intel recommends that you create your project with a fully implemented QDR II or
QDR II+ SRAM Controller with UniPHY interface, and observe the interface timing
margins to determine the actual margins for your design.
Although the recommendations in this chapter are based on simulations, you can
apply the same general principles when determining the best termination scheme,
drive strength setting, and loading style to any board designs. Even armed with this
knowledge, it is still critical that you perform simulations, either using IBIS or HSPICE
models, to determine the quality of signal integrity on your designs.
6.6 Package Deskew for QDR II and QDR-IV
You should follow Intel's package deskew guidance if you are using Stratix V or Arria
10 devices.
For more information on package deskew, refer to Package Deskew.
6.7 QDR-IV Layout Approach
For all practical purposes, you can regard the TimeQuest timing analyzer's report on
your memory interface as definitive for a given set of memory and board timing
parameters. You will find timing under Report DDR in TimeQuest and on the Timing
Analysis tab in the parameter editor.
The following flowchart illustrates the recommended process to follow during the
design phase, to determine timing margin and make iterative improvements to your
design.
(Flowchart: Start with the primary layout. Calculate setup and hold derating, calculate channel signal integrity, calculate board skews, and find the memory timing parameters. Generate an IP core that accurately represents your memory subsystem, including pin-out and accurate parameters in the parameter editor's Board Settings tab, then run Quartus Prime compilation with the generated IP core. If there are any non-core timing violations in the Report DDR panel, adjust the layout to improve trace length mismatch, signal reflections (ISI), crosstalk, or memory speed grade, and repeat; otherwise, you are done.)
For more detailed simulation guidance for Arria 10, refer to the wiki: http://
www.alterawiki.com/wiki/Arria_10_EMIF_Simulation_Guidance
Intersymbol Interference/Crosstalk
For information on intersymbol interference and crosstalk, refer to the wiki: http://
www.alterawiki.com/wiki/Arria_10_EMIF_Simulation_Guidance
Board Skew
For information on calculating board skew parameters, refer to the Board Skew Parameter Tool. If you know the absolute delays for all the memory-related traces, the interactive Board Skew Parameter Tool can help you calculate the necessary parameters.
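As a purely illustrative example of the kind of calculation involved (the exact parameter names depend on your IP and its Board Settings tab), if the longest trace in a data group has a simulated delay of 575 ps and the shortest has 535 ps, the skew within that group is 575 - 535 = 40 ps, which you would typically enter as a maximum skew of 40 ps (or ±20 ps about the average).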
Memory Timing Parameters
You can find the memory timing parameters to enter in the parameter editor, in your
memory vendor's datasheet.
6.8 QDR-IV Layout Guidelines
Observe the following layout guidelines for your QDR-IV interface. These guidelines apply to all device families that support QDR-IV, including Arria 10 and Stratix 10.
Parameter | Guidelines
General Routing
• If you must route signals of the same net group on different layers with the same impedance characteristic, simulate your worst-case PCB trace tolerances to determine actual propagation delay differences. Typical layer-to-layer trace delay variations are on the order of 15 ps/inch.
• Avoid T-junctions greater than 150 ps.
• Match all signals within a given DQ group with a maximum skew of ±10 ps and route them on the same layer.
Clock Routing
• Route clocks on inner layers with outer-layer run lengths held to less than 150 ps.
• Clock signals should maintain a 10-mil (0.254 mm) spacing from other nets.
• Clocks should maintain a length-matching between clock pairs of ±5 ps.
• Differential clocks should maintain a length-matching between P and N signals of ±2 ps.
• Space between different clock pairs should be at least three times the space between the traces of a differential pair.
Address and Command Routing
• To minimize crosstalk, route address, bank address, and command signals on a different layer than the data signals.
• Do not route the differential clock signals close to the address signals.
• Keep the distance from the pin on the QDR-IV component to the stub termination resistor (VTT) to less than 50 ps for the address/command signal group.
• Route the mem_ck (CK/CK#) clocks and set them as the target trace propagation delays for the address/command signal group. Match the CK/CK# clock to within ±50 ps of all the DK/DK# clocks for both ports.
• Route the address/control signal group, ideally on the same layer as the mem_ck (CK/CK#) clocks, to within ±20 ps skew of the mem_ck (CK/CK#) traces.
Data Signals
• For port B only: Swap the polarity of the QKB and QKB# signals with respect to the polarity of the differential buffer inputs on the FPGA. Connect the positive leg of the differential input buffer on the FPGA to the QDR-IV QKB# (negative) pin and vice-versa. Note that the port names at the top level of the IP already reflect this swap (that is, mem_qkb is assigned to the negative buffer leg, and mem_qkb_n is assigned to the positive buffer leg).
• For each port, route the DK/DK# write clock and QK/QK# read clock associated with a DQ group on the same PCB layer. Match these clock pairs to within ±5 ps.
• For each port, set the DK/DK# or QK/QK# clock as the target trace propagation delay for the associated data signals (DQ).
• For each port, route the data (DQ) signals for the DQ group, ideally on the same layer as the associated QK/QK# and DK/DK# clocks, to within ±10 ps skew of the target clock.
Maximum Trace Length
• Keep the maximum trace length of all signals from the FPGA to the QDR-IV components to 600 ps.
Spacing Guidelines
• Avoid routing two signal layers next to each other. Always make sure that the signals related to the memory interface are routed between appropriate GND or power layers.
• For data and data strobe traces: Maintain at least 3H spacing between the edges (air-gap) of these traces, where H is the vertical distance to the closest return path for that particular trace.
• For address, command, and control traces: Maintain at least 3H spacing between the edges (air-gap) of these traces, where H is the vertical distance to the closest return path for that particular trace.
• For clock (mem_CK) traces: Maintain at least 5H spacing between two clock pairs, or between a clock pair and any other memory interface trace, where H is the vertical distance to the closest return path for that particular trace.
Trace Matching Guidance
The following layout approach is recommended, based on the preceding guidelines:
1. For port B only: Swap the polarity of the QKB and QKB# signals with respect to the polarity of the differential buffer inputs on the FPGA. Connect the positive leg of the differential input buffer on the FPGA to the QDR-IV QKB# (negative) pin and vice-versa. Note that the port names at the top level of the IP already reflect this swap (that is, mem_qkb is assigned to the negative buffer leg, and mem_qkb_n is assigned to the positive buffer leg).
2. For each port, set the DK/DK# or QK/QK# clock as the target trace propagation delay for the associated data signals (DQ).
3. For each port, route the data (DQ) signals for the DQ group, ideally on the same layer as the associated QK/QK# and DK/DK# clocks, to within ±10 ps skew of the target clock.
4. Route the mem_ck (CK/CK#) clocks and set them as the target trace propagation delays for the address/command signal group. Match the CK/CK# clock to within ±50 ps of all the DK/DK# clocks for both ports.
5. Route the address/control signal group, ideally on the same layer as the mem_ck (CK/CK#) clocks, to within ±10 ps skew of the mem_ck (CK/CK#) traces.
6.9 Document Revision History
Date | Version | Changes
May 2017 | 2017.05.08 | Added Stratix 10 to the QDR-IV Layout Guidelines section. Rebranded as Intel.
October 2016 | 2016.10.31 | Maintenance release.
May 2016 | 2016.05.02 | Maintenance release.
November 2015 | 2015.11.02 | Maintenance release.
May 2015 | 2015.05.04 | In the first guideline of the QDR-IV Layout Recommendations and the Data Signals section of the QDR-IV Layout Guidelines, revised the information for Port B only.
December 2014 | 2014.12.15 | Maintenance release.
August 2014 | 2014.08.15 | Change to K/K# Clocks row in the Termination Recommendations for Arria V, Stratix III, Stratix IV, and Stratix V Devices table. Removed millimeter approximations from lengths expressed in picoseconds in the QDR II and QDR II+ SRAM Layout Guidelines table. Minor formatting fixes in the QDR II and QDR II+ SRAM Layout Guidelines table.
December 2013 | 2013.12.16 | Consolidated General Layout Guidelines.
November 2012 | 4.2 | Changed chapter number from 7 to 8.
June 2012 | 4.1 | Added Feedback icon.
November 2011 | 4.0 | Added Arria V information.
June 2011 | 3.0 | Added Stratix V information.
December 2010 | 2.0 | Maintenance release.
July 2010 | 1.0 | Initial release.
7 Implementing and Parameterizing Memory IP
The following topics describe the general overview of the IP core design flow to help you quickly get started with any IP core.
The IP Library is installed as part of the Quartus® Prime installation process. You can select and parameterize any Intel® IP core from the library. Intel provides an integrated parameter editor that allows you to customize IP cores to support a wide variety of applications. The parameter editor guides you through the setting of parameter values and selection of optional ports. The following section describes the general design flow and use of Intel IP cores.
Note:
Information for Arria 10 External Memory Interface IP also applies to Arria 10 External
Memory Interface for HPS IP unless stated otherwise.
Related Links
Intel FPGA Design Store
Design Example for Arria 10 DDR3 External Memory Interface is available in the
Intel FPGA Design Store.
7.1 Installing and Licensing IP Cores
The Intel Quartus Prime software installation includes the Intel FPGA IP library. This
library provides useful IP core functions for your production use without the need for
an additional license. Some IP cores in the library require that you purchase a
separate license for production use. The OpenCore® feature allows evaluation of any
Intel FPGA IP core in simulation and compilation in the Quartus Prime software. Upon
satisfaction with functionality and performance, visit the Self Service Licensing Center
to obtain a license number for any Intel FPGA product.
The Quartus Prime software installs IP cores in the following locations by default:
Figure 60. IP Core Installation Path
intelFPGA(_pro*)
  quartus - Contains the Quartus Prime software
    ip - Contains the IP library and third-party IP cores
      altera - Contains the IP library source code
        <IP core name> - Contains the IP core source files
Table 55. IP Core Installation Locations
Location | Software | Platform
<drive>:\intelFPGA_pro\quartus\ip\altera | Quartus Prime Pro Edition | Windows*
<drive>:\intelFPGA\quartus\ip\altera | Quartus Prime Standard Edition | Windows
<home directory>:/intelFPGA_pro/quartus/ip/altera | Quartus Prime Pro Edition | Linux*
<home directory>:/intelFPGA/quartus/ip/altera | Quartus Prime Standard Edition | Linux
7.2 Design Flow
You can implement the external memory interface IP using the following flows:
• IP Catalog flow
• Qsys flow
The following figure shows the stages for creating a system in the Quartus Prime
software using the available flows.
Figure 61. Design Flows
(Flowchart: Select a design flow, either the IP Catalog flow or the Qsys flow. In either flow, specify parameters; in the Qsys flow, also complete the system. Optionally perform functional simulation, and debug the design if simulation does not give the expected results. Then add constraints and compile the design, after which the IP is complete.)
Note to Figure:
The IP Catalog design flow is suited for simple designs where you want to manually
instantiate the external memory interface IP into a larger component. The Qsys design
flow is recommended for more complex system designs where you want the tool to
manage the instantiation process.
7.2.1 IP Catalog Design Flow
The IP Catalog design flow allows you to customize the external memory interface IP,
and manually integrate the function into your design.
7.2.1.1 IP Catalog and Parameter Editor
The IP Catalog displays the IP cores available for your project. Use the following
features of the IP Catalog to locate and customize an IP core:
• Filter IP Catalog to Show IP for active device family or Show IP for all device families. If you have no project open, select the Device Family in IP Catalog.
• Type in the Search field to locate any full or partial IP core name in IP Catalog.
• Right-click an IP core name in IP Catalog to display details about supported devices, to open the IP core's installation folder, and for links to IP documentation.
• Click Search for Partner IP to access partner IP information on the web.
The parameter editor prompts you to specify an IP variation name, optional ports, and
output file generation options. The parameter editor generates a top-level Quartus
Prime IP file (.ip) for an IP variation in Quartus Prime Pro Edition projects.
The parameter editor generates a top-level Quartus IP file (.qip) for an IP variation
in Quartus Prime Standard Edition projects. These files represent the IP variation in
the project, and store parameterization information.
Figure 62. IP Parameter Editor (Quartus Prime Pro Edition)
(The screenshot highlights where to view IP port and parameter details, where to specify a name for your IP variation, and where to apply preset parameters for specific applications.)
Figure 63. IP Parameter Editor (Quartus Prime Standard Edition)
(The screenshot highlights where to view IP port and parameter details, and where to specify the IP variation name and target device.)
7.2.1.2 Specifying Parameters for the IP Catalog Flow
To specify parameters with the IP Catalog design flow, perform the following steps:
1. In the Quartus Prime software, create a Quartus Prime project using the New Project Wizard available from the File menu.
2. Launch the IP Catalog from the Tools menu.
3. Select an external memory interface IP from the Memory Interfaces and Controllers folder in the Library list.
   Note: The availability of external memory interface IP depends on the device family your design is using.
4. Depending on the window which appears, proceed as follows:
   • New IP Instance window: Specify the Top-level Name and Device Settings, and click Ok.
   • Save IP Variation window: Specify the IP variation file name and IP variation file type, and click Ok.
5. In the Presets window, select the preset matching your design requirement, and click Apply.
   Tip: If none of the presets match your design requirements, you can apply the closest preset available and then change the parameters manually. This method may be faster than entering all the parameters manually, and reduces the chance of having incorrect settings.
6. Specify the parameters on all tabs.
   Note:
   • For detailed explanation of the parameters, refer to Parameterizing Memory Controllers with UniPHY IP and Parameterizing Memory Controllers with Arria 10 External Memory Interface IP.
   • Although you have applied presets, you may need to modify some of the preset parameters depending on the frequency of operation. A typical list of parameters which you might need to change includes the Memory CAS Latency setting, the Memory CAS Write Latency setting, and the tWTR, tFAW, tRRD, and tRTP settings.
   Tip:
   • As a good practice, review any warning messages displayed in the Messages Window and correct any errors before making further changes.
   • To simplify future work, you might want to store the current configuration by saving your own presets. To create, modify, or remove your own custom presets, click New, Update, or Delete at the bottom of the Presets list.
   • If you want to generate an example design for your current configuration, click Example Design at the top-right corner of the parameter editor, specify a path for the example design, and click Ok.
7. Depending on which external memory interface IP is selected, perform the
following steps to complete the IP generation:
   • For Arria 10 or Stratix 10 External Memory Interface IP:
     a. Click Finish. Your configuration is saved as a .qsys file.
     b. Click Yes when you are prompted to generate your IP.
     c. Set Create HDL design files for synthesis to Verilog or VHDL.
        Tip: If you want to do RTL simulation of your design, you should set Create simulation model to either Verilog or VHDL. Some RTL simulation-related files, including simulator-specific scripts, are generated only if you specify this parameter.
        Note: For Arria 10 External Memory Interface IP, the synthesis and simulation model files are identical. However, there are some differences in file types when generating for VHDL. For synthesis files, only the top-level wrapper is generated in VHDL; the other files are generated in System Verilog. For simulation files, all the files are generated as a Mentor-tagged encrypted IP for VHDL-only simulator support.
     d. Click Generate.
     e. When generation has completed, click Finish.
   • For UniPHY-based IP:
     a. Click the Finish button.
        Note: The Finish button may be unavailable until you have corrected all parameterization errors listed in the Messages window.
     b. If prompted, specify whether you want to generate an example design by checking or unchecking Generate Example Design, and then click Generate.
        Caution: If you have already generated an example design, uncheck Generate Example Design to prevent your previously generated files from being overwritten.
     c. When generation is completed, click Exit.
8. Click Yes if you are prompted to add the .qip to the current Quartus Prime
project. You can also turn on Automatically add Quartus Prime IP Files to all
projects.
Tip: Always read the generated readme.txt file, which contains information and
guidelines specific to your configuration.
9. You can now integrate your custom IP core instance in your design, simulate, and
compile. While integrating your IP core instance into your design, you must make
appropriate pin assignments. You can create a virtual pin to avoid making specific
pin assignments for top-level signals while you are simulating and not ready to
map the design to hardware.
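As a sketch of one way to do this (the signal name below is a placeholder for one of your own top-level signals), you can mark a signal as a virtual pin with an assignment in your Quartus Prime Settings File (.qsf), or with the equivalent Assignment Editor entry:

   # Treat this top-level signal as a virtual pin so the Fitter does not require a physical pin assignment
   set_instance_assignment -name VIRTUAL_PIN ON -to <top-level signal name>

Remove the virtual pin assignment before you map the design to hardware.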
Note:
For information about the Quartus Prime software, including virtual pins and the IP
Catalog and Qsys, refer to Quartus Prime Help.
Related Links
• Simulating Intel FPGA Designs
• Quartus Prime Help
7.2.1.3 Using Example Designs
When you generate your IP, you can instruct the system to produce an example
design consisting of an external memory interface IP of your configuration, together
with a traffic generator.
For synthesis, the example design includes a project for which you can specify pin
locations and a target device, compile in the Quartus Prime software, verify timing
closure, and test on your board using the programming file generated by the Quartus
Prime assembler. For simulation, the example design includes an example memory
model with which you can run simulation and evaluate the result.
For a UniPHY-based external memory interface, click Example Design in the
parameter editor, or enable Generate Example Design. The system produces an
example design for synthesis in the example_project directory, and generation
scripts for simulation in the simulation directory. To generate the complete example
design for RTL simulation, follow the instructions in the readme.txt file in the
simulation directory.
For Arria 10 External Memory Interface IP, click Example Design in the parameter
editor. The system produces generation scripts in the directory path that you specify.
To create a complete example design for synthesis or RTL simulation, follow the
instructions in the generated <variation_name>/altera_emif_arch_nf_140/
<synth|sim>/<variation_name>_altera_emif_arch_nf_140_<unique
ID>_readme.txt file.
To compile an example design, open the .qpf file for the project and follow the
standard design flow, including constraining the design prior to full compilation. If
necessary, change the example project device to match the device in your project.
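If you prefer to run the compilation from the command line instead of the GUI, a minimal sketch (assuming the example project is named ed_synth, which is a placeholder) is:

   # Run the full Quartus Prime compilation flow on the example project
   quartus_sh --flow compile ed_synth

You can then review the timing reports and use the generated programming file to test the example design on your board.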
For more information about example designs, refer to Functional Description—Example
Top Level project in Volume 3 of the External Memory Interface Handbook. For more
information about simulating an example design, refer to Simulating the Example
Design in the Simulating Memory IP chapter.
7.2.1.4 Constraining the Design
For Arria 10 External Memory Interface IP for HPS, pin location assignments are
predefined in the Quartus Prime IP file (.qip). In UniPHY-based and non-HPS Arria 10
external memory interfaces, you must make your own location assignments.
Note:
You should not overconstrain any EMIF IP-related registers unless you are advised to
do so by Intel, or you fully understand the effect on the external memory interface
operation. Also, ensure that any wildcards in your user logic do not accidentally target
EMIF IP-related registers.
For more information about timing constraints and analysis, refer to Analyzing Timing
of Memory IP.
7.2.1.4.1 Adding Pins and DQ Group Assignments
The assignments defined in the <variation_name>_pin_assignments.tcl script
(for UniPHY-based IP) and the Quartus Prime IP file (.qip) (for Arria 10 EMIF IP) help
you to set up the I/O standards and the input/output termination for the external
memory interface IP. These assignments also help to relate the DQ pin groups together
for the Quartus Prime Fitter to place them correctly.
• For UniPHY-based external memory interfaces, run the <variation_name>_pin_assignments.tcl script to apply the input and output termination, I/O standards, and DQ group assignments to your design. To run the pin assignment script, follow these steps:
  a. On the Processing menu, point to Start, and click Start Analysis and Synthesis. Allow Analysis and Synthesis to finish without errors before proceeding to the next step.
  b. On the Tools menu, click Tcl Scripts.
  c. Specify the pin_assignments.tcl and click Run.
  The pin assignment script does not create a PLL reference clock for the design. You must create a clock for the design and provide pin assignments for the signals of both the example driver and testbench that the IP core variation generates.
  Note: For some UniPHY-based IP configurations, the afi_clk clock does not have a global signal assignment constraint. In this case, you should add a suitable assignment for your design. For example, for a UniPHY-based DDR3 IP targeting a Stratix IV device, if0|pll0|upll_memphy|auto_generated|clk[0] does not have a global signal assignment, and you should consider adding either a global clock or a dual-regional clock assignment to your project for this clock, as shown in the example assignment after the following notes.
Note:
• For Arria 10 External Memory Interface IP, the Quartus Prime software automatically reads assignments from the .qip file during compilation, so it is not necessary to apply assignments to your design manually.
• If you must overwrite the default assignments, ensure that you make your changes in the Quartus Prime Settings File (.qsf) and not the .qip file. Assignments in the .qsf file take precedence over assignments in the .qip file. Note also that if you rerun the <variation_name>_pin_assignments.tcl file, it overwrites your changes.
• If the PLL input reference clock pin does not have the same I/O standard as the memory interface I/Os, a no-fit might occur because incompatible I/O standards cannot be placed in the same I/O bank.
• If you are upgrading your memory IP from an earlier Quartus Prime version, rerun the pin_assignments.tcl script in the later Quartus Prime revision.
• If you encounter a shortage of clock resources, the AFI clock domain can be moved between regional, dual-regional, and global. Moving any other clock domain can result in fit errors or timing closure problems.
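As a sketch of such a global signal assignment in the .qsf (the hierarchical node name is the Stratix IV example quoted in the note above; your own node name will differ, and you can use DUAL-REGIONAL CLOCK instead of GLOBAL CLOCK if that suits your clocking plan):

   # Promote the UniPHY afi_clk PLL output onto a global clock network
   set_instance_assignment -name GLOBAL_SIGNAL "GLOBAL CLOCK" -to "if0|pll0|upll_memphy|auto_generated|clk[0]"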
7.2.1.5 Compiling the Design
After constraining your design, compile your design in the Quartus Prime software to
generate timing reports to verify whether timing has been met.
To compile the design, on the Processing menu, click Start Compilation.
After you have compiled the top-level file, you can perform RTL simulation or program
your targeted Intel device to verify the top-level file in hardware.
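For example, after a successful compilation you can program the device over JTAG from the command line; a minimal sketch (assuming a USB-Blaster cable and a programming file named top.sof, both of which are placeholders) is:

   # Program the FPGA with the generated SRAM Object File
   quartus_pgm -c USB-Blaster -m JTAG -o "p;top.sof"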
Note:
In UniPHY-based memory controllers, the derive_pll_clocks command can affect
timing closure if it is called before the memory controller files are loaded. Ensure that
the Quartus Prime IP File (.qip) appears in the file list before any Synopsys Design
Constraint Files (.sdc) files that contain derive_pll_clocks.
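For example, in the project's .qsf file, listing the IP's .qip ahead of your own .sdc keeps the ordering correct; the file names here are placeholders for your own files:

   # The memory controller .qip (and the constraints it pulls in) is read before the user SDC
   set_global_assignment -name QIP_FILE ddr3_ctlr.qip
   set_global_assignment -name SDC_FILE user_timing.sdc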
For more information about simulating the memory IP, refer to Simulating Memory IP.
7.2.2 Qsys System Integration Tool Design Flow
You can use the Qsys system integration tool to build a system that includes your
customized IP core.
You easily can add other components and quickly create a Qsys system. Qsys
automatically generates HDL files that include all of the specified components and
interconnections. In Qsys, you specify the connections you want. The HDL files are
ready to be compiled by the Quartus Prime software to produce output files for
programming an Intel device. Qsys generates Verilog HDL simulation models for the IP
cores that comprise your system.
The following figure shows a high level block diagram of an example Qsys system.
Figure 64. Example Qsys System
(Block diagram: a PCIe-to-Ethernet bridge Qsys system in which a PCI Express subsystem, an embedded controller, and an Ethernet subsystem connect as memory masters and CSR interfaces to a DDR3 SDRAM controller and PHY driving external DDR3 SDRAM, with the controller also exposing a CSR memory slave interface.)
For more information about the Qsys system interconnect, refer to the Qsys Interconnect chapter in volume 1 of the Quartus Prime Handbook and to the Avalon Interface Specifications.
For more information about the Qsys tool and the Quartus Prime software, refer to the System Design with Qsys section in volume 1 of the Quartus Prime Handbook and to Quartus Prime Help.
Related Links
• Qsys Interconnect
• Avalon Interface Specifications
• System Design with Qsys
7.2.2.1 Specify Parameters for the Qsys Flow
To specify parameters for your IP core using the Qsys flow, follow these steps:
1. In the Quartus Prime software, create a new Quartus Prime project using the New Project Wizard available from the File menu.
2. On the Tools menu, click Qsys.
   Note: Qsys automatically sets device parameters based on your Quartus Prime project settings. To set device parameters manually, use the Device Family tab.
3. In the IP Catalog, select the available external memory interface IP from the Memory Interfaces and Controllers folder in the Library list. (For Arria 10 EMIF for HPS, select the external memory interface IP from the Hard Processor Components folder.) The relevant parameter editor appears.
   Note: The availability of external memory interface IP depends on the device family your design is using. To use Arria 10 External Memory Interface for HPS IP, your design must target a device containing at least one HPS CPU core.
4. From the Presets list, select the preset matching your design requirement, and click Apply.
   Tip: If none of the presets match your design requirements, you can apply the closest preset available and then change the inappropriate parameters manually. This method may be faster than entering all the parameters manually, and reduces the chance of having incorrect settings.
5. Specify the parameters on all tabs.
   Note:
   • For detailed explanation of the parameters, refer to Parameterizing Memory Controllers with UniPHY IP and Parameterizing Memory Controllers with Arria 10 External Memory Interface IP.
   • Although you have applied presets, you may need to modify some of the preset parameters depending on the frequency of operation. A typical list of parameters which you might need to change includes the Memory CAS Latency setting, the Memory CAS Write Latency setting, and the tWTR, tFAW, tRRD, and tRTP settings.
   • For UniPHY-based IP, turn on Generate power-of-2 bus widths for Qsys or SOPC Builder on the Controller Settings tab.
   Tip:
   • As a good practice, review any warning messages displayed in the Messages Window and correct any errors before making further changes.
   • To simplify future work, you might want to store the current configuration by saving your own presets. To create, modify, or remove your own custom presets, click New, Update, or Delete at the bottom of the Presets list.
   • If you want to generate an example design for your current configuration, click Example Design at the top-right corner of the parameter editor, specify a path for the example design, and click Ok.
6. Click Finish to complete the external memory interface IP instance and add it to the system.
   Note: The Finish button may be unavailable until you have corrected all parameterization errors listed in the Messages window.
7.2.2.2 Completing the Qsys System
To complete the Qsys system, follow these steps:
1. Add and parameterize any additional components.
2. Connect the components using the Connection panel on the System Contents
tab.
3. In the Export column, enter the name of any connections that should be a top-level Qsys system port.
Note: Ensure that the memory and oct interfaces are exported to the top-level
Qsys system port. If these interfaces are already exported, take care not to
accidentally rename or delete either of them in the Export column of the
System Contents tab.
4. Click Finish.
5. Specify the File Name and click Save.
7. Set Create HDL design files for synthesis to either Verilog or VHDL.
Tip: If you want to do RTL simulation of your design, you should set Create
simulation model to either Verilog or VHDL. Some RTL simulation-related
files, including simulator-specific scripts, are generated only if you specify this
parameter.
Note: For Arria 10 External Memory Interface IP, the synthesis and simulation
model files are identical. However, there are some differences in file types
when generating for VHDL. For synthesis files, only the top-level wrapper is
generated in VHDL; the other files are generated in System Verilog. For
simulation files, all the files are generated as a Mentor-tagged encrypted IP
for VHDL-only simulator support.
8. Click Generate.
9.
When generation has completed, click Finish.
10. If you are prompted to add the .qip file to the current Quartus Prime project,
click Yes (If you want, you can turn on Automatically Add Quartus Prime IP
Files to all projects).
Tip: Always read the generated readme.txt file, because it contains information
and guidelines specific to your configuration.
You can now simulate and compile your design. Before compilation, you must make appropriate pin assignments. You can create a virtual pin to avoid making specific pin assignments for top-level signals while you are simulating and are not yet ready to map the design to hardware.
For information about the Quartus Prime software, including virtual pins and the IP
Catalog and Qsys, refer to the Quartus Prime Help.
7.3 UniPHY-Based External Memory Interface IP
This section contains information about parameterizing UniPHY-based external
memory interfaces.
7.3.1 Qsys Interfaces
The following tables list the signals available for each interface in Qsys, and provide a
description and guidance on connecting those interfaces.
7.3.1.1 DDR2 SDRAM Controller with UniPHY Interfaces
The following table lists the DDR2 SDRAM with UniPHY signals available for each
interface in Qsys and provides a description and guidance on how to connect those
interfaces.
Table 56. DDR2 SDRAM Controller with UniPHY Interfaces
Interface (Signals) | Interface Type | Description/How to Connect
pll_ref_clk interface (pll_ref_clk) | Clock input | PLL reference clock input.
global_reset interface (global_reset_n) | Reset input | Asynchronous global reset for the PLL and all logic in the PHY.
soft_reset interface (soft_reset_n) | Reset input | Asynchronous reset input. Resets the PHY, but not the PLL that the PHY uses.
afi_reset interface (afi_reset_n) | Reset output (PLL master/no sharing) | When the interface is in PLL master or no sharing modes, this interface is an asynchronous reset output of the AFI interface. The controller asserts this interface when the PLL loses lock or the PHY is reset.
afi_reset_export interface (afi_reset_export_n) | Reset output (PLL master/no sharing) | This interface is a copy of the afi_reset interface. It is intended to be connected to PLL sharing slaves.
afi_reset_in interface (afi_reset_n) | Reset input (PLL slave) | When the interface is in PLL slave mode, this interface is a reset input that you must connect to the afi_reset_export_n output of an identically configured memory interface in PLL master mode.
afi_clk interface (afi_clk) | Clock output (PLL master/no sharing) | This AFI interface clock can be a full-rate or half-rate memory clock frequency based on the memory interface parameterization. When the interface is in PLL master or no sharing modes, this interface is a clock output.
afi_clk_in interface (afi_clk) | Clock input (PLL slave) | This AFI interface clock can be a full-rate or half-rate memory clock frequency based on the memory interface parameterization. When the interface is in PLL slave mode, you must connect this afi_clk input to the afi_clk output of an identically configured memory interface in PLL master mode.
afi_half_clk interface (afi_half_clk) | Clock output (PLL master/no sharing) | The AFI half clock that is half the frequency of afi_clk. When the interface is in PLL master or no sharing modes, this interface is a clock output.
afi_half_clk_in interface (afi_half_clk) | Clock input (PLL slave) | The AFI half clock that is half the frequency of afi_clk. When the interface is in PLL slave mode, this is a clock input that you must connect to the afi_half_clk output of an identically configured memory interface in PLL master mode.
memory interface (DDR2 SDRAM) (mem_a, mem_ba, mem_ck, mem_ck_n, mem_cke, mem_cs_n, mem_dm, mem_ras_n, mem_cas_n, mem_we_n, mem_dq, mem_dqs, mem_dqs_n, mem_odt, mem_ac_parity, mem_err_out_n, mem_parity_error_n) | Conduit | Interface signals between the PHY and the memory device.
memory interface (LPDDR2) (mem_ca, mem_ck, mem_ck_n, mem_cke, mem_cs_n, mem_dm, mem_dq, mem_dqs, mem_dqs_n) | Conduit | Interface signals between the PHY and the memory device.
avl interface (avl_ready, avl_burst_begin, avl_addr, avl_rdata_valid, avl_rdata, avl_wdata, avl_be, avl_read_req, avl_write_req, avl_size) | Avalon-MM Slave | Avalon-MM interface signals between the memory interface and user logic.
status interface (local_init_done, local_cal_success, local_cal_fail) | Conduit | Memory interface status signals.
oct interface (rup (Stratix III/IV, Arria II GZ), rdn (Stratix III/IV, Arria II GZ), rzq (Stratix V, Arria V, Cyclone V)) | Conduit | OCT reference resistor pins for rup/rdn or rzqin.
local_powerdown interface (local_powerdn_ack) | Conduit | This powerdown interface for the controller is enabled only when you turn on Enable Auto Powerdown.
pll_sharing interface (pll_mem_clk, pll_write_clk, pll_addr_cmd_clk, pll_locked, pll_avl_clk, pll_config_clk, pll_hr_clk, pll_p2c_read_clk, pll_c2p_write_clk, pll_dr_clk) | Conduit | Interface signals for PLL sharing, to connect PLL masters to PLL slaves. This interface is enabled only when you set PLL sharing mode to master or slave.
dll_sharing interface (dll_delayctrl, dll_pll_locked) | Conduit | DLL sharing interface for connecting DLL masters to DLL slaves. This interface is enabled only when you set DLL sharing mode to master or slave.
oct_sharing interface (seriesterminationcontrol, parallelterminationcontrol) | Conduit | OCT sharing interface for connecting OCT masters to OCT slaves. This interface is enabled only when you set OCT sharing mode to master or slave.
autoprecharge_req interface (local_autopch_req) | Conduit | Precharge interface for connection to a custom control block. This interface is enabled only when you turn on Auto precharge Control.
user_refresh interface (local_refresh_req, local_refresh_chip, local_refresh_ack) | Conduit | User refresh interface for connection to a custom control block. This interface is enabled only when you turn on User Auto-Refresh Control.
self_refresh interface (local_self_rfsh_req, local_self_rfsh_chip, local_self_rfsh_ack) | Conduit | Self refresh interface for connection to a custom control block. This interface is enabled only when you turn on Self-refresh Control.
ecc_interrupt interface (ecc_interrupt) | Conduit | ECC interrupt signal for connection to a custom control block. This interface is enabled only when you turn on Error Detection and Correction Logic.
csr interface (csr_write_req, csr_read_req, csr_waitrequest, csr_addr, csr_be, csr_wdata, csr_rdata, csr_rdata_valid) | Avalon-MM Slave | Configuration and status register signals for the memory interface, for connection to an Avalon-MM master. This interface is enabled only when you turn on Configuration and Status Register.
Hard Memory Controller MPFE FIFO Clock Interface (mp_cmd_clk, mp_rfifo_clk, mp_wfifo_clk, mp_cmd_reset, mp_rfifo_reset, mp_wfifo_reset) | Conduit | When you enable the Hard Memory Interface, three FIFO buffers (command, read data, and write data) are created in the MPFE. Each FIFO buffer has its own clock and reset port. This interface is enabled when you turn on the Enable Hard Memory Interface.
Hard Memory Controller Bonding Interface (bonding_in_1, bonding_in_2, bonding_in_3, bonding_out_1, bonding_out_2, bonding_out_3) | Conduit | Bonding interface to bond two controllers to expand the bandwidth. This interface is enabled when you turn on the Export bonding interface.
Note to Table:
1. Signals available only in DLL master mode.
7.3.1.2 DDR3 SDRAM Controller with UniPHY Interfaces
The following table lists the DDR3 SDRAM with UniPHY signals available for each
interface in Qsys and provides a description and guidance on how to connect those
interfaces.
Table 57.
DDR3 SDRAM Controller with UniPHY Interfaces
Signals in Interface
Interface Type
Description/How to Connect
pll_ref_clk interface
pll_ref_clk
Clock input
PLL reference clock input.
Reset input
Asynchronous global reset for PLL and all logic in PHY.
Reset input
Asynchronous reset input. Resets the PHY, but not the PLL
that the PHY uses.
Reset output (PLL
master/no sharing)
When the interface is in PLL master or no sharing modes,
this interface is an asynchronous reset output of the AFI
interface. This interface is asserted when the PLL loses lock
or the PHY is reset.
Reset output (PLL
master/no sharing)
This interface is a copy of the afi_reset interface. It is
intended to be connected to PLL sharing slaves.
Reset input (PLL slave)
When the interface is in PLL slave mode, this interface is a
reset input that you must connect to the
afi_reset_export_n output of an identically configured
memory interface in PLL master mode.
Clock output (PLL
master/no sharing)
This AFI interface clock can be full-rate or half-rate
memory clock frequency based on the memory interface
parameterization. When the interface is in PLL master or no
sharing modes, this interface is a clock output.
Clock input (PLL slave)
This AFI interface clock can be full-rate or half-rate
memory clock frequency based on the memory interface
parameterization. When the interface is in PLL slave mode,
global_reset interface
global_reset_n
soft_reset interface
soft_reset_n
afi_reset interface
afi_reset_n
afi_reset_export interface
afi_reset_export_n
afi_reset_in interface
afi_reset_n
afi_clk interface
afi_clk
afi_clk_in interface
afi_clk
this is a clock input that you must connect to the afi_clk
output of an identically configured memory interface in PLL
master mode.
afi_half_clk interface
afi_half_clk
Clock output (PLL
master/no sharing)
The AFI half clock that is half the frequency of
afi_clk. When the interface is in PLL master or no sharing
modes, this interface is a clock output.
afi_half_clk_in interface
afi_half_clk
Clock input (PLL slave)
The AFI half clock that is half the frequency of the
afi_clk. When the interface is in PLL slave mode, you
must connect this afi_half_clk input to the
afi_half_clk output of an identically configured memory
interface in PLL master mode.
memory interface
mem_a
Conduit
Interface signals between the PHY and the memory device.
Avalon-MM Slave
Avalon-MM interface signals between the memory interface
and user logic.
mem_ba
mem_ck
mem_ck_n
mem_cke
mem_cs_n
mem_dm
mem_ras_n
mem_cas_n
mem_we_n
mem_dq
mem_dqs
mem_dqs_n
mem_odt
mem_reset_n
mem_ac_parity
mem_err_out_n
mem_parity_error_n
avl interface
avl_ready
avl_burst_begin
avl_addr
avl_rdata_valid
avl_rdata
avl_wdata
avl_be
avl_read_req
avl_write_req
avl_size
status interface
local_init_done
Conduit
Memory interface status signals.
Conduit
OCT reference resistor pins for rup/rdn or rzqin.
Conduit
This powerdown interface for the controller is enabled only
when you turn on Enable Auto Power Down.
Conduit
Interface signals for PLL sharing, to connect PLL masters to
PLL slaves. This interface is enabled only when you set PLL
sharing mode to master or slave.
Conduit
DLL sharing interface for connecting DLL masters to DLL
slaves. This interface is enabled only when you set DLL
sharing mode to master or slave.
Conduit
OCT sharing interface for connecting OCT masters to OCT
slaves. This interface is enabled only when you set OCT
sharing mode to master or slave.
Conduit
Precharge interface for connection to a custom control
block. This interface is enabled only when you turn on
Auto-precharge Control.
local_cal_success
local_cal_fail
oct interface
rup (Stratix III/IV, Arria II GZ)
rdn (Stratix III/IV, Arria II GZ)
rzq (Stratix V, Arria V, Cyclone
V)
local_powerdown interface
local_powerdn_ack
pll_sharing interface
pll_mem_clk
pll_write_clk
pll_addr_cmd_clk
pll_locked
pll_avl_clk
pll_config_clk
pll_hr_clk
pll_p2c_read_clk
pll_c2p_write_clk
pll_dr_clk
dll_sharing interface
dll_delayctrl
dll_pll_locked
oct_sharing interface
seriesterminationcontrol
parallelterminationcontrol
autoprecharge_req interface
local_autopch_req
user_refresh interface
local_refresh_req
Conduit
User refresh interface for connection to a custom control
block. This interface is enabled only when you turn on User
Auto-Refresh Control.
Conduit
Self refresh interface for connection to a custom control
block. This interface is enabled only when you turn on
Self-refresh Control.
Conduit
ECC interrupt signal for connection to a custom control
block. This interface is enabled only when you turn on
Error Detection and Correction Logic.
Avalon-MM Slave
Configuration and status register signals for the memory
interface, for connection to an Avalon-MM master. This
interface is enabled only when you turn on Configuration
and Status Register.
local_refresh_chip
local_refresh_ack
self_refresh interface
local_self_rfsh_req
local_self_rfsh_chip
local_self_rfsh_ack
ecc_interrupt interface
ecc_interrupt
csr interface
csr_write_req
csr_read_req
csr_waitrequest
csr_addr
csr_be
csr_wdata
csr_rdata
csr_rdata_valid
Hard Memory Controller MPFE FIFO Clock Interface
mp_cmd_clk
Conduit
mp_rfifo_clk
mp_wfifo_clk
mp_cmd_reset_n
When you enable the Hard Memory Interface, three FIFO
buffers (command, read data, and write data) are created
in the MPFE. Each FIFO buffer has its own clock and reset
port.
This interface is enabled when you turn on Enable Hard
Memory Interface.
mp_rfifo_reset_n
mp_wfifo_reset_n
Hard Memory Controller Bonding Interface
bonding_in_1
Conduit
bonding_in_2
Bonding interface to bond two controllers to expand
the bandwidth. This interface is enabled when you turn on
Export bonding interface.
bonding_in_3
bonding_out_1
bonding_out_2
bonding_out_3
Note to Table:
1. Signals available only in DLL master mode.
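To illustrate the PLL master/slave connections described in the table above, the following fragment of a Qsys system Tcl script connects a PLL-slave interface to an identically configured PLL-master interface. This is a sketch only: the instance names, the component name, and the qsys package version are assumptions, and you must still set PLL sharing mode to Master and Slave on the respective instances in the parameter editor.

package require -exact qsys 14.0
# Two identically configured DDR3 SDRAM Controller with UniPHY instances;
# the instance and component names below are placeholders.
create_system pll_sharing_example
add_instance emif_master altera_mem_if_ddr3_emif
add_instance emif_slave altera_mem_if_ddr3_emif
# The PLL slave takes its AFI clocks and resets from the PLL master.
add_connection emif_master.afi_clk emif_slave.afi_clk_in
add_connection emif_master.afi_half_clk emif_slave.afi_half_clk_in
add_connection emif_master.afi_reset_export emif_slave.afi_reset_in
# Conduit connection for the exported PLL sharing signals.
add_connection emif_master.pll_sharing emif_slave.pll_sharing
save_system pll_sharing_example.qsys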
7.3.1.3 LPDDR2 SDRAM Controller with UniPHY Interfaces
The following table lists the LPDDR2 SDRAM signals available for each interface in
Qsys and provides a description and guidance on how to connect those interfaces.
Table 58.
LPDDR2 SDRAM Controller with UniPHY Interfaces
Signals in Interface
Interface Type
Description/How to Connect
pll_ref_clk interface
pll_ref_clk
Clock input
PLL reference clock input.
Reset input
Asynchronous global reset for PLL and all logic in PHY.
Reset input
Asynchronous reset input. Resets the PHY, but not the PLL that
the PHY uses.
Reset output (PLL
master/no sharing)
When the interface is in PLL master or no sharing modes, this
interface is an asynchronous reset output of the AFI interface.
The controller asserts this interface when the PLL loses lock or
the PHY is reset.
global_reset interface
global_reset_n
soft_reset interface
soft_reset_n
afi_reset interface
afi_reset_n
afi_reset_export interface
afi_reset_export_n
Reset output (PLL
master/no sharing)
This interface is a copy of the afi_reset interface. It is intended
to be connected to PLL sharing slaves.
Reset input (PLL slave)
When the interface is in PLL slave mode, this interface is a reset
input that you must connect to the afi_reset_export_n
output of an identically configured memory interface in PLL
master mode.
Clock output (PLL
master/no sharing)
This AFI interface clock can be a full-rate or half-rate memory
clock frequency based on the memory interface
parameterization. When the interface is in PLL master or no
sharing modes, this interface is a clock output.
Clock input (PLL slave)
This AFI interface clock can be a full-rate or half-rate memory
clock frequency based on the memory interface
parameterization. When the interface is in PLL slave mode, you
must connect this afi_clk input to the afi_clk output of an
identically configured memory interface in PLL master mode.
Clock output (PLL
master/no sharing)
The AFI half clock that is half the frequency of afi_clk. When the
interface is in PLL master or no sharing modes, this interface is
a clock output.
Clock input (PLL slave)
The AFI half clock that is half the frequency of afi_clk. When the
interface is in PLL slave mode, this is a clock input that you
must connect to the afi_half_clk output of an identically
configured memory interface in PLL master mode.
afi_reset_in interface
afi_reset_n
afi_clk interface
afi_clk
afi_clk_in interface
afi_clk
afi_half_clk interface
afi_half_clk
afi_half_clk_in interface
afi_half_clk
Memory interface
mem_ca
Conduit
Interface signals between the PHY and the memory device.
Avalon-MM Slave
Avalon-MM interface signals between the memory interface and
user logic.
Conduit
Memory interface status signals.
Conduit
OCT reference resistor pins for rzqin.
mem_ck
mem_ck_n
mem_cke
mem_cs_n
mem_dm
mem_dq
mem_dqs
mem_dqs_n
avl interface
avl_ready
avl_burst_begin
avl_addr
avl_rdata_valid
avl_rdata
avl_wdata
avl_be
avl_read_req
avl_write_req
avl_size
status interface
local_init_done
local_cal_success
local_cal_fail
oct interface
rzq
local_powerdown interface
local_powerdn_ack
Conduit
This powerdown interface for the controller is enabled only
when you turn on Enable Auto Powerdown.
local_deep_powerdn interface
local_deep_powerdn_ack
Conduit
Deep power-down interface for the controller. This interface is
enabled only when you turn on Enable Deep Power-Down Controls.
Conduit
Interface signals for PLL sharing, to connect PLL masters to PLL
slaves. This interface is enabled only when you set PLL sharing
mode to master or slave.
local_deep_powerdn_chip
local_deep_powerdn_req
pll_sharing interface
pll_mem_clk
pll_write_clk
pll_addr_cmd_clk
pll_locked
pll_avl_clk
pll_config_clk
pll_mem_phy_clk
afi_phy_clk
pll_write_clk_pre_phy_clk
dll_sharing interface
dll_delayctrl
Conduit
DLL sharing interface for connecting DLL masters to DLL slaves.
This interface is enabled only when you set DLL sharing mode to
master or slave.
Conduit
OCT sharing interface for connecting OCT masters to OCT
slaves. This interface is enabled only when you set OCT sharing
mode to master or slave.
dll_pll_locked
oct_sharing interface
seriesterminationcontrol
parallelterminationcontrol
autoprecharge_req interface
local_autopch_req
Conduit
Precharge interface for connection to a custom control block.
This interface is enabled only when you turn on Auto-precharge
Control.
Conduit
User refresh interface for connection to a custom control block.
This interface is enabled only when you turn on User Auto-Refresh Control.
Conduit
Self refresh interface for connection to a custom control block.
This interface is enabled only when you turn on Self-refresh
Control.
Conduit
ECC interrupt signal for connection to a custom control block.
This interface is enabled only when you turn on Error Detection
and Correction Logic.
Avalon-MM Slave
Configuration and status register signals for the memory
interface, for connection to an Avalon-MM master. This interface
is enabled only when you turn on Configuration and Status
Register.
user_refresh interface
local_refresh_req
local_refresh_chip
local_refresh_ack
self_refresh interface
local_self_rfsh_req
local_self_rfsh_chip
local_self_rfsh_ack
ecc_interrupt interface
ecc_interrupt
csr interface
csr_write_req
csr_read_req
csr_waitrequest
csr_addr
csr_be
csr_wdata
csr_rdata
csr_rdata_valid
Local_rdata_error interface
Local_rdata_error
Conduit
Indicates read data error when Error Detection and Correction
logic is enabled.
Hard Memory Controller MPFE FIFO Clock Interface
mp_cmd_clk
Conduit
When you enable the Hard Memory Interface, three FIFO
buffers (command, read data, and write data) are created in the
MPFE. Each FIFO buffer has its own clock and reset port.
This interface is enabled when you turn on Enable Hard
Memory Interface.
mp_rfifo_clk
mp_wfifo_clk
mp_cmd_reset_n
mp_rfifo_reset_n
mp_wfifo_reset_n
Hard Memory Controller Bonding Interface
bonding_in_1
Conduit
Bonding interface to bond two controllers to expand the
bandwidth. This interface is enabled when you turn on the
Export bonding interface.
bonding_in_2
bonding_in_3
bonding_out_1
bonding_out_2
bonding_out_3
7.3.1.4 QDR II and QDR II+ SRAM Controller with UniPHY Interfaces
The following table lists the QDR II and QDR II+ SRAM signals available for each
interface in Qsys and provides a description and guidance on how to connect those
interfaces.
Table 59.
QDR II and QDR II+ SRAM Controller with UniPHY Interfaces
Signals in Interface
Interface Type
Description/How to Connect
pll_ref_clk interface
pll_ref_clk
Clock input
PLL reference clock input.
Reset input
Asynchronous global reset for PLL and all logic in
PHY.
Reset input
Asynchronous reset input. Resets the PHY, but not
the PLL that the PHY uses.
Reset output (PLL
master/no sharing)
When the interface is in PLL master or no sharing
modes, this interface is an asynchronous reset
output of the AFI interface. This interface is
asserted when the PLL loses lock or the PHY is
reset.
global_reset interface
global_reset_n
soft_reset interface
soft_reset_n
afi_reset interface
afi_reset_n
afi_reset_export interface
afi_reset_export_n
Reset output (PLL
master/no sharing)
This interface is a copy of the afi_reset interface.
It is intended to be connected to PLL sharing
slaves.
Reset input (PLL slave)
When the interface is in PLL slave mode, this
interface is a reset input that you must connect to
the afi_reset_export_n output of an
identically configured memory interface in PLL
master mode.
Clock output (PLL
master/no sharing)
This AFI interface clock can be full-rate or half-rate
memory clock frequency based on the memory
interface parameterization. When the interface is
in PLL master or no sharing modes, this interface
is a clock output.
Clock input (PLL slave)
This AFI interface clock can be full-rate or half-rate
memory clock frequency based on the memory
interface parameterization. When the interface is
in PLL slave mode, this is a clock input that you
must connect to the afi_clk output of an
identically configured memory interface in PLL
master mode.
Clock output (PLL
master/no sharing)
The AFI half clock that is half the frequency of
afi_clk. When the interface is in PLL master or
no sharing modes, this interface is a clock output.
Clock input (PLL slave)
The AFI half clock that is half the frequency of
afi_clk. When the interface is in PLL slave mode,
you must connect this afi_half_clk input to
the afi_half_clk output of an identically
configured memory interface in PLL master mode.
Conduit
Interface signals between the PHY and the
memory device.
afi_reset_in interface
afi_reset_n
afi_clk interface
afi_clk
afi_clk_in interface
afi_clk
afi_half_clk interface
afi_half_clk
afi_half_clk_in interface
afi_half_clk
memory interface
mem_a
mem_cqn
mem_bws_n
mem_cq
The sequencer holds mem_doff_n low during
initialization to ensure that internal PLL and DLL
circuits in the memory device do not lock until
clock signals have stabilized.
mem_d
mem_k
mem_k_n
mem_q
mem_wps_n
mem_rps_n
mem_doff_n
avl_r interface
avl_r_read_req
Avalon-MM Slave
Avalon-MM interface between memory interface
and user logic for read requests.
Avalon-MM Slave
Avalon-MM interface between memory interface
and user logic for write requests.
Conduit
Memory interface status signals.
Conduit
OCT reference resistor pins for rup/rdn or rzqin.
Conduit
Interface signals for PLL sharing, to connect PLL
masters to PLL slaves. This interface is enabled
only when you set PLL sharing mode to master
or slave.
Conduit
DLL sharing interface for connecting DLL masters
to DLL slaves. This interface is enabled only when
you set DLL sharing mode to master or slave.
avl_r_ready
avl_r_addr
avl_r_size
avl_r_rdata_valid
avl_r_rdata
avl_w interface
avl_w_write_req
avl_w_ready
avl_w_addr
avl_w_size
avl_w_wdata
avl_w_be
status interface
local_init_done
local_cal_success
local_cal_fail
oct interface
rup (Stratix III/IV, Arria II GZ,
Arria II GX)
rdn (Stratix III/IV, Arria II GZ,
Arria II GX)
rzq (Stratix V, Arria V, Cyclone V)
pll_sharing interface
pll_mem_clk
pll_write_clk
pll_addr_cmd_clk
pll_locked
pll_avl_clk
pll_config_clk
pll_hr_clk
pll_p2c_read_clk
pll_c2p_write_clk
pll_dr_clk
dll_sharing interface
dll_delayctrl
dll_pll_locked
oct_sharing interface
seriesterminationcontrol(Stratix III/IV/
V, Arria II GZ, Arria V, Cyclone V)
Conduit
parallelterminationcontrol
(Stratix III/IV/V, Arria II GZ, Arria V,
Cyclone V)
OCT sharing interface for connecting OCT masters
to OCT slaves. This interface is enabled only when
you set OCT sharing mode to master or slave.
terminationcontrol (Arria II GX)
Note to Table:
1. Signals available only in DLL master mode.
7.3.1.5 RLDRAM II Controller with UniPHY Interfaces
The following table lists the RLDRAM II signals available for each interface in Qsys and
provides a description and guidance on how to connect those interfaces.
Table 60.
RLDRAM II Controller with UniPHY Interfaces
Interface Name
Interface Type
Description
pll_ref_clk interface
pll_ref_clk
Clock input.
PLL reference clock input.
Reset input
Asynchronous global reset for PLL and all logic in PHY.
Reset input
Asynchronous reset input. Resets the PHY, but not the PLL
that the PHY uses.
Reset output (PLL
master/no sharing)
When the interface is in PLL master or no sharing modes,
this interface is an asynchronous reset output of the AFI
interface. This interface is asserted when the PLL loses lock
or the PHY is reset.
Reset output (PLL
master/no sharing)
This interface is a copy of the afi_reset interface. It is
intended to be connected to PLL sharing slaves.
Reset input (PLL slave)
When the interface is in PLL slave mode, this interface is a
reset input that you must connect to the
afi_reset_export_n output of an identically configured
memory interface in PLL master mode.
Clock output (PLL
master/no sharing)
This AFI interface clock can be full-rate or half-rate memory
clock frequency based on the memory interface
parameterization. When the interface is in PLL master or no
sharing modes, this interface is a clock output.
Clock input (PLL slave)
This AFI interface clock can be full-rate or half-rate memory
clock frequency based on the memory interface
parameterization. When the interface is in PLL slave mode,
global_reset interface
global_reset_n
soft_reset interface
soft_reset_n
afi_reset interface
afi_reset_n
afi_reset_export interface
afi_reset_export_n
afi_reset_in interface
afi_reset_n
afi_clk interface
afi_clk
afi_clk_in interface
afi_clk
you must connect this afi_clk input to the afi_clk
output of an identically configured memory interface in PLL
master mode.
afi_half_clk interface
afi_half_clk
Clock output (PLL
master/no sharing)
The AFI half clock that is half the frequency of the
afi_clk. When the interface is in PLL master or no sharing
modes, this interface is a clock output.
afi_half_clk_in interface
afi_half_clk
Clock input (PLL slave)
The AFI half clock that is half the frequency of the
afi_clk. When the interface is in PLL slave mode, you
must connect this afi_half_clk input to the
afi_half_clk output of an identically configured memory
interface in PLL master mode.
memory interface
mem_a
Conduit
Interface signals between the PHY and the memory device.
Avalon-MM Slave
Avalon-MM interface between memory interface and user
logic.
Conduit
Memory interface status signals.
mem_ba
mem_ck
mem_ck_n
mem_cs_n
mem_dk
mem_dk_n
mem_dm
mem_dq
mem_qk
mem_qk_n
mem_ref_n
mem_we_n
avl interface
avl_size
avl_wdata
avl_rdata_valid
avl_rdata
avl_ready
avl_write_req
avl_read_req
avl_addr
status interface
local_init_done
local_cal_success
local_cal_fail
oct interface
rup (Stratix III/IV, Arria II GZ)
Conduit
OCT reference resistor pins for rup/rdn or rzqin.
Conduit
Interface signals for PLL sharing, to connect PLL masters to
PLL slaves. This interface is enabled only when you set PLL
sharing mode to master or slave.
Conduit
DLL sharing interface for connecting DLL masters to DLL
slaves. This interface is enabled only when you set DLL
sharing mode to master or slave.
Conduit
OCT sharing interface for connecting OCT masters to OCT
slaves. This interface is enabled only when you set OCT
sharing mode to master or slave.
rdn (Stratix III/IV, Arria II GZ)
rzq (Stratix V)
pll_sharing interface
pll_mem_clk
pll_write_clk
pll_addr_cmd_clk
pll_locked
pll_avl_clk
pll_config_clk
pll_hr_clk
pll_p2c_read_clk
pll_c2p_write_clk
pll_dr_clk
dll_sharing interface
dll_delayctrl
oct_sharing interface
seriesterminationcontrol
parallelterminationcontrol
parity_error_interrupt interface
parity_error
Conduit
Parity error interrupt conduit for connection to custom
control block. This interface is enabled only if you turn on
Enable Error Detection Parity.
Conduit
User refresh interface for connection to custom control
block. This interface is enabled only if you turn on Enable
User Refresh.
Conduit
Reserved interface required for certain pin configurations when you select the
Nios® II-based sequencer.
user_refresh interface
ref_req
ref_ba
ref_ack
reserved interface
reserved
Note to Table:
1. Signals available only in DLL master mode.
7.3.1.6 RLDRAM 3 UniPHY Interface
The following table lists the RLDRAM 3 signals available for each interface in Qsys and
provides a description and guidance on how to connect those interfaces.
Table 61.
RLDRAM 3 UniPHY Interface
Signals in Interface
Interface Type
Description/How to Connect
pll_ref_clk interface
pll_ref_clk
Clock input
PLL reference clock input.
Reset input
Asynchronous global reset for PLL and all logic in PHY.
Reset input
Asynchronous reset input. Resets the PHY, but not the PLL that
the PHY uses.
Reset output (PLL
master/no sharing)
When the interface is in PLL master or no sharing modes, this
interface is an asynchronous reset output of the AFI interface.
The controller asserts this interface when the PLL loses lock or
the PHY is reset.
global_reset interface
global_reset_n
soft_reset interface
soft_reset_n
afi_reset interface
afi_reset_n
afi_reset_export interface
afi_reset_export_n
Reset output (PLL
master/no sharing)
This interface is a copy of the afi_reset interface. It is intended
to be connected to PLL sharing slaves.
Reset input (PLL slave)
When the interface is in PLL slave mode, this interface is a reset
input that you must connect to the afi_reset_export_n
output of an identically configured memory interface in PLL
master mode.
Clock output (PLL
master/no sharing)
This AFI interface clock can be a full-rate or half-rate memory
clock frequency based on the memory interface
parameterization. When the interface is in PLL master or no
sharing modes, this interface is a clock output.
Clock input (PLL slave)
This AFI interface clock can be a full-rate or half-rate memory
clock frequency based on the memory interface
parameterization. When the interface is in PLL slave mode, you
must connect this afi_clk input to the afi_clk output of an
identically configured memory interface in PLL master mode.
Clock output (PLL
master/no sharing)
The AFI half clock that is half the frequency of afi_clk. When the
interface is in PLL master or no sharing modes, this interface is
a clock output.
Clock input (PLL slave)
The AFI half clock that is half the frequency of afi_clk. When the
interface is in PLL slave mode, this is a clock input that you
must connect to the afi_half_clk output of an identically
configured memory interface in PLL master mode.
Conduit
Interface signals between the PHY and the memory device.
afi_reset_in interface
afi_reset_n
afi_clk interface
afi_clk
afi_clk_in interface
afi_clk
afi_half_clk interface
afi_half_clk
afi_half_clk_in interface
afi_half_clk
memory interface
mem_a
mem_ba
mem_ck
mem_ck_n
mem_cs_n
mem_dk
mem_dk_n
mem_dm
mem_dq
mem_qk
mem_qk_n
mem_ref_n
mem_we_n
mem_reset_n
afi interface
afi_addr
Avalon-MM Slave
Altera PHY interface (AFI) signals between the PHY and
controller.
Conduit
OCT reference resistor pins for rzqin.
Conduit
Interface signals for PLL sharing, to connect PLL masters to PLL
slaves. This interface is enabled only when you set PLL sharing
mode to master or slave.
afi_ba
afi_cs_n
afi_we_n
afi_ref_n
afi_wdata_valid
afi_wdata
afi_dm
afi_rdata
afi_rdata_en
afi_rdata_en_full
afi_rdata_valid
afi_rst_n
afi_cal_success
afi_cal_fail
afi_wlat
afi_rlat
oct interface
oct_rzqin
pll_sharing interface
pll_mem_clk
pll_write_clk
pll_addr_cmd_clk
pll_locked
pll_avl_clk
pll_config_clk
pll_mem_phy_clk
afi_phy_clk
pll_write_clk_pre_phy_clk
pll_p2c_read_clk
pll_c2p_write_clk
dll_sharing interface
dll_delayctrl
Conduit
DLL sharing interface for connecting DLL masters to DLL slaves.
This interface is enabled only when you set DLL sharing mode to
master or slave.
Conduit
OCT sharing interface for connecting OCT masters to OCT
slaves. This interface is enabled only when you set OCT sharing
mode to master or slave.
dll_pll_locked
oct_sharing interface
seriesterminationcontrol
parallelterminationcontrol
7.3.2 Generated Files for Memory Controllers with the UniPHY IP
When you complete the IP generation flow, the generated files are created in your
project directory. The directory structure varies somewhat, depending on the
tool you use to parameterize and generate the IP.
Note:
The PLL parameters are statically defined in the <variation_name>_parameters.tcl
file at generation time. To ensure that the timing constraints and timing reports are
correct, if you edit the PLL parameters, apply those changes to the PLL parameters in this file as well.
The following table lists the generated directory structure and key files created with
the IP Catalog and Qsys.
Table 62.
Generated Directory Structure and Key Files for the IP Catalog Synthesis Files
Directory
File Name
Description
<working_dir>/
<variation_name>.qip
Quartus Prime IP file which refers to all
generated files in the synthesis fileset.
Include this file in your Quartus Prime
project.
<working_dir>/
<variation_name>.v or
<variation_name>.vhd
Top-level wrapper synthesis files.
.v is IEEE Encrypted Verilog.
.vhd is generated VHDL.
<working_dir>/<variation_name>/
<variation_name>_0002.v
UniPHY top-level wrapper.
<working_dir>/<variation_name>/
*.v, *.sv, *.tcl, *.sdc, *.ppf
RTL and constraints files for synthesis.
<working_dir>/<variation_name>/
<variation_name>_p0_pin_assignments.tcl
Pin constraints script to be run after
synthesis.
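As a minimal sketch of using the synthesis fileset in the table above, assuming an existing Quartus Prime project and the placeholder names shown, you might add the .qip file and apply the generated pin constraints from the Quartus Prime Tcl console as follows; the generated README and this handbook remain the authoritative flow.

# Placeholder project and variation names; run from the Quartus Prime Tcl console.
project_open my_project
# Reference all generated synthesis files through the .qip file.
set_global_assignment -name QIP_FILE <variation_name>.qip
export_assignments
# After analysis and synthesis completes, apply the generated pin constraints.
source <variation_name>/<variation_name>_p0_pin_assignments.tcl
project_close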
Table 63.
Generated Directory Structure and Key Files for the IP Catalog Simulation
Files
Directory
File Name
Description
<working_dir>/
<variation_name>_sim/
<variation_name>.v
Top-level wrapper simulation files for
both Verilog and VHDL.
<working_dir>/
<variation_name>_sim/
<subcomponent_module>/
*.v, *.sv, *.vhd, *.vho, *.hex, *.mif
RTL and constraints files for simulation.
.v and .sv files are IEEE Encrypted
Verilog.
.vhd and .vho are generated VHDL.
Table 64.
Generated Directory Structure and Key Files for the IP Catalog—Example
Design Fileset Synthesis Files
Directory
File Name
Description
<variation_name>_example_design
/example_project/
<variation_name>_example.qip
Quartus Prime IP file that refers to all
generated files in the synthesizable
project.
<variation_name>_example_design
/example_project/
<variation_name>_example.qpf
Quartus Prime project for synthesis
flow.
<variation_name>_example_design
/example_project/
<variation_name>_example.qsf
Quartus Prime project for synthesis
flow.
<variation_name>_example_design
/example_project/
<variation_name>_example/
<variation_name>_example.v
Top-level wrapper.
<variation_name>_example_design
/example_project/
<variation_name>_example/
submodules/
*.v, *.sv, *.tcl, *.sdc, *.ppf
RTL and constraints files.
<variation_name>_example_design
/example_project/
<variation_name>_example/
submodules/
<variation_name>_example_if0_p0_pin_assignments.tcl
Pin constraints script to be run after
synthesis.
_if0 and _p0 are instance names.
Table 65.
Generated Directory Structure and Key Files for the IP Catalog—Example
Design Fileset Simulation Files
Directory
File Name
Description
<variation_name>_example_design
/simulation/
generate_sim_verilog_example_design.tcl
Run this file to generate the Verilog
simulation example design.
<variation_name>_example_design
/simulation/
generate_sim_vhdl_example_design.tcl
Run this file to generate the VHDL
simulation example design.
<variation_name>_example_design
/simulation/
README.txt
A text file with instructions about how
to generate and run the simulation
example design.
<variation_name>_example_design
/simulation/verilog/mentor
run.do
ModelSim* script to simulate the
generated Verilog example design.
<variation_name>_example_design
/simulation/vhdl/mentor
run.do
ModelSim script to simulate the
generated VHDL example design.
<variation_name>_example_design
/simulation/verilog/
<variation_name>_sim/
<variation_name>_example_sim.v
Top-level wrapper (Testbench) for
Verilog.
<variation_name>_example_design
/simulation/vhdl/
<variation_name>_sim/
<variation_name>_example_sim.vhd
Top-level wrapper (Testbench) for
VHDL.
<variation_name>_example_design
/simulation/
<variation_name>_sim/verilog/
submodules/
*.v, *.sv, *.hex, *.mif
RTL and ROM data for Verilog.
<variation_name>_example_design
/simulation/
<variation_name>_sim/vhdl/
submodules/
*.vhd, *.vho, *.hex, *.mif
RTL and ROM data for VHDL.
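For example, after you generate the Verilog simulation example design, you might launch the ModelSim script listed above from the ModelSim Tcl prompt as follows; the path is a placeholder that follows the directory structure in the table above.

# From the ModelSim Tcl prompt, after generating the Verilog example design.
cd <variation_name>_example_design/simulation/verilog/mentor
do run.do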
Table 66.
Generated Directory Structure and Key Files for Qsys
Directory
File Name
Description
<working_dir>/<system_name>/
synthesis/
<system_name>.qip
Quartus Prime IP file that refers to all
the generated files in the synthesis
fileset.
<working_dir>/<system_name>/
synthesis/
<system_name>.v
System top-level RTL for synthesis.
<working_dir>/<system_name>/
simulation/
<system_name>.v or
<system_name>.vhd
System top-level RTL for simulation.
.v file is IEEE Encrypted Verilog.
.vhd file is generated VHDL.
<working_dir>/<system_name>/
synthesis/ submodules/
*.v, *.sv, *.tcl, *.sdc, *.ppf
RTL and constraints files for synthesis.
<working_dir>/<system_name>/
simulation/ submodules/
*.v, *.sv, *.hex, *.mif
RTL and ROM data for simulation.
The following table lists the prefixes or instance names of submodule files within the
memory interface IP. These instances are concatenated to form unique synthesis and
simulation filenames.
Table 67.
Prefixes of Submodule Files
Prefixes
Description
_c0
Specifies the controller.
_d0
Specifies the driver or traffic generator.
_dll0
Specifies the DLL.
_e0
Specifies the example design.
_if0
Specifies the memory Interface.
_m0
Specifies the AFI mux.
_oct0
Specifies the OCT.
_p0
Specifies the PHY.
_pll0
Specifies the PLL.
_s0
Specifies the sequencer.
_t0
Specifies the traffic generator status checker.
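For example, in the generated filename <variation_name>_example_if0_p0_pin_assignments.tcl listed earlier, the _if0 prefix identifies the memory interface instance and the _p0 prefix identifies the PHY within it.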
7.3.3 Parameterizing Memory Controllers
This section describes the parameters you can set for various UniPHY-based memory
controllers.
Parameterizing Memory Controllers with UniPHY IP
The Parameter Settings page in the parameter editor allows you to parameterize the
following settings for the LPDDR2, DDR2, DDR3 SDRAM, QDR II, QDR II+ SRAM,
RLDRAM II, and RLDRAM 3 memory controllers with the UniPHY IP:
•
PHY Settings
•
Memory Parameters
•
Memory Timing
•
Board Settings
•
Controller Settings
•
Diagnostics
The messages window at the bottom of the parameter editor displays information
about the memory interface, warnings, and errors if you are trying to create
something that is not supported.
Enabling the Hard Memory Interface
For Arria V and Cyclone V devices, enable the hard memory interface by turning on
Interface Type ➤ Enable Hard Memory Interface in the parameter editor. The
hard memory interface uses the hard memory controller and hard memory PHY blocks
in the Arria V and Cyclone V devices.
The half-rate bridge option is available only as an SOPC Builder component, Avalon-MM DDR Memory Half-Rate Bridge, for use in a Qsys project.
7.3.3.1 PHY Settings for UniPHY IP
The following table lists the PHY parameters for UniPHY-based EMIF IP.
Table 68.
PHY Parameters
Parameter
Description
General Settings
Speed Grade
Specifies the speed grade of the targeted FPGA device that affects the generated
timing constraints and timing reporting.
Generate PHY only
Turn on this option to generate the UniPHY core without a memory controller.
When you turn on this option, the AFI interface is exported so that you can easily
connect your own memory controller.
Not applicable to RLDRAM 3 UniPHY, because there is no controller support for
RLDRAM 3 UniPHY.
Clocks
Memory clock frequency
The frequency of the clock that drives the memory device. Use up to 4 decimal
places of precision.
To obtain the maximum supported frequency for your target memory
configuration, refer to the External Memory Interface Spec Estimator page on
www.altera.com.
Achieved memory clock frequency
The actual frequency the PLL generates to drive the external memory interface
(memory clock).
PLL reference clock frequency
The frequency of the input clock that feeds the PLL. Use up to 4 decimal places of
precision.
Rate on Avalon-MM interface
The width of the data bus on the Avalon-MM interface. Full results in a width of 2×
the memory data width. Half results in a width of 4× the memory data width.
Quarter results in a width of 8× the memory data width. Use Quarter for
memory frequency 533 MHz and above.
To determine the Avalon-MM interface rate selection for other memories, refer to
the local interface clock rate for your target device in the External Memory
Interface Spec Estimator page on www.altera.com.
Note: MAX 10 devices support only half-rate Avalon-MM interface.
Achieved local clock frequency
The actual frequency the PLL generates to drive the local interface for the memory
controller (AFI clock).
Enable AFI half rate clock
Exports the AFI half-rate clock, which runs at half the AFI clock rate, to the top
level.
Advanced PHY Settings
Advanced clock phase control
Enables access to clock phases. Default value should suffice for most DIMMs and
board layouts, but can be modified if necessary to compensate for larger address
and command versus clock skews.
This option is available for DDR, DDR2 and DDR3 SDRAM only.
Note: This parameter is not available for MAX 10 devices.
Additional address and command
clock phase
Allows you to increase or decrease the amount of phase shift on the address and
command clock. The base phase shift center aligns the address and command
clock at the memory device, which may not be the optimal setting under all
circumstances. Increasing or decreasing the amount of phase shift can improve
timing. The default value is 0 degrees.
In DDR, DDR2, DDR3 SDRAM, and LPDDR2 SDRAM, you can set this value from
-360 to 360 degrees. In QDRII/II+ SRAM and RLDRAM II, the available settings
are -45, -22.5, 22.5, and 45.
To achieve the optimum setting, adjust the value based on the address and
command timing analysis results.
Note: This parameter is not available for MAX 10 devices.
Additional phase for core-to-periphery transfer
Allows you to phase shift the latching clock of the core-to-periphery transfers. By
delaying the latch clock, a positive phase shift value improves setup timing for
transfers between registers in the core and the half-rate DDIO_OUT blocks in the
periphery. Adjust this setting according to the core timing analysis.
The default value is 0 degrees. You can set this value from -179 to 179 degrees.
Note: This parameter is not available for MAX 10 devices.
Additional CK/CK# phase
Allows you to increase or decrease the amount of phase shift on the CK/CK#
clock. The base phase shift center aligns the address and command clock at the
memory device, which may not be the optimal setting under all circumstances.
Increasing or decreasing the amount of phase shift can improve timing. Increasing
or decreasing the phase shift on CK/CK# also impacts the read, write, and
leveling transfers, which increasing or decreasing the phase shift on the address
and command clocks does not.
To achieve the optimum setting, adjust the value based on the address and
command timing analysis results. Ensure that the read, write, and write leveling
timings are met after adjusting the clock phase. Adjust this value when there is a
core timing failure after adjusting Additional address and command clock
phase.
The default value is 0 degrees. You can set this value from -360 to 360 degrees.
This option is available for LPDDR2, DDR, DDR2, and DDR3 SDRAM only.
Note: This parameter is not available for MAX 10 devices.
Supply voltage
The supply voltage and sub-family type of memory.
This option is available for DDR3 SDRAM only.
I/O standard
The I/O standard voltage. Set the I/O standard according to your design’s
memory standard.
PLL sharing mode
When you select No sharing, the parameter editor instantiates a PLL block
without exporting the PLL signals. When you select Master, the parameter editor
instantiates a PLL block and exports the signals. When you select Slave, the
parameter editor exposes a PLL interface and you must connect an external PLL
master to drive the PLL slave interface signals.
Select No sharing if you are not sharing PLLs, otherwise select Master or Slave.
For more information about resource sharing, refer to “The DLL and PLL Sharing
Interface” section in the Functional Description—UniPHY chapter of the External
Memory Interface Handbook.
Note: This parameter is not available for MAX 10 devices.
Number of PLL sharing interfaces
This option allows you to specify the number of PLL sharing interfaces to create,
facilitating creation of many one-to-one connections in Qsys flow. In Megawizard,
you can select one sharing interface and manually connect the master to all the
slaves.
This option is enabled when you set PLL sharing mode to Master.
Note: This parameter is not available for MAX 10 devices.
DLL sharing mode
When you select No sharing, the parameter editor instantiates a DLL block
without exporting the DLL signals. When you select Master, the parameter editor
instantiates a DLL block and exports the signals. When you select Slave, the
parameter editor exposes a DLL interface and you must connect an external DLL
master to drive the DLL slave signals.
Select No sharing if you are not sharing DLLs, otherwise select Master or Slave.
For more information about resource sharing, refer to “The DLL and PLL Sharing
Interface” section in the Functional Description—UniPHY chapter of the External
Memory Interface Handbook.
Note: This parameter is not available for MAX 10 devices.
Number of DLL sharing interfaces
This option allows you to specify the number of DLL sharing interfaces to create,
facilitating creation of many one-to-one connections in Qsys flow. In Megawizard,
you can select one sharing interface and manually connect the master to all the
slaves.
This option is enabled when you set DLL sharing mode to Master.
Note: This parameter is not available for MAX 10 devices.
OCT sharing mode
When you select No sharing, the parameter editor instantiates an OCT block
without exporting the OCT signals. When you select Master, the parameter editor
instantiates an OCT block and exports the signals. When you select Slave, the
parameter editor exposes an OCT interface and you must connect an external OCT
control block to drive the OCT slave signals.
Select No sharing if you are not sharing OCT blocks, otherwise select Master or
Slave.
For more information about resource sharing, refer to “The OCT Sharing Interface”
section in the Functional Description—UniPHY chapter of the External Memory
Interface Handbook.
Note: This parameter is not available for MAX 10 devices.
Number of OCT sharing interfaces
This option allows you to specify the number of OCT sharing interfaces to create,
facilitating creation of many one-to-one connections in Qsys flow. In Megawizard,
you can select one sharing interface and manually connect the master to all the
slaves.
This option is enabled when you set OCT sharing mode to Master.
Note: This parameter is not available for MAX 10 devices.
Reconfigurable PLL location
When you set the PLL used in the UniPHY memory interface to be reconfigurable
at run time, you must specify the location of the PLL. This assignment generates a
PLL that can only be placed in the given sides.
Sequencer optimization
Select Performance to enable the Nios II-based sequencer, or Area to enable the
RTL-based sequencer.
Intel recommends that you enable the Nios-based sequencer for memory clock
frequencies greater than 400 MHz and enable the RTL-based sequencer if you
want to reduce resource utilization.
This option is available for QDRII and QDR II+ SRAM, and RLDRAM II only.
Note: This parameter is not available for MAX 10 devices.
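As a worked example of the Rate on Avalon-MM interface setting described above: a 72-bit memory interface results in an Avalon-MM data width of 144 bits at full rate (2 × 72), 288 bits at half rate (4 × 72), and 576 bits at quarter rate (8 × 72).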
Related Links
•
External Memory Interface Spec Estimator
•
Functional Description–UniPHY
7.3.3.2 Memory Parameters for LPDDR2, DDR2 and DDR3 SDRAM for UniPHY IP
The following table lists the memory parameters for LPDDR2, DDR2 and DDR3
SDRAM.
Use the Memory Parameters tab to apply the memory parameters from your
memory manufacturer’s data sheet.
Table 69.
Memory Parameters for LPDDR2, DDR2, and DDR3 SDRAM
Parameter
Description
Memory vendor
The vendor of the memory device.
Select the memory vendor according to
the memory vendor you use. For
memory vendors that are not listed in
the setting, select JEDEC with the
nearest memory parameters and edit
the parameter values according to the
values of the memory vendor that you
use. However, if you select a
configuration from the list of memory
presets, the default memory vendor for
that preset setting is automatically
selected.
Memory format
The format of the memory device.
Select Discrete if you are using just
the memory device. Select Unbuffered
or Registered for DIMM format; use
the DIMM format to turn on leveling
circuitry. LPDDR2 supports discrete
devices only. DDR2 also supports DIMM.
Number of clock enables per device/DIMM
The number of clock enable pins per
device or DIMM. This value also
determines the number of ODT signals.
(This parameter is available only when
the selected memory format is
Registered.)
Note: This parameter is not available
for MAX 10 devices.
Number of chip selects per device/DIMM
The number of chip selects per device
or DIMM. This value is not necessarily
the same as the number of ranks for
RDIMMs or LRDIMMs. This value must
be 2 or greater for RDIMMs or
LRDIMMs. (This parameter is available
only when the selected memory format
is Registered.)
Note: This parameter is not available
for MAX 10 devices.
Number of ranks per slot
The number of ranks per DIMM slot.
(This parameter is available only when
the selected memory format is
Registered.)
Note: This parameter is not available
for MAX 10 devices.
Number of slots
The number of DIMM slots. (This
parameter is available only when the
selected memory format is
Registered.)
Note: This parameter is not available
for MAX 10 devices.
Memory device speed grade
The maximum frequency at which the
memory device can run.
Total interface width
The total number of DQ pins of the
memory device. Limited to 144 bits for
DDR2 and DDR3 SDRAM (with or
without leveling).
The total interface width also depends on
the rate on the Avalon-MM interface,
because the maximum Avalon data width is
1024 bits. For example, if you select a
144-bit total interface width with quarter
rate, the Avalon data width is 1152 bits,
which exceeds the maximum Avalon data width.
DQ/DQS group size
The number of DQ bits per DQS group.
Number of DQS groups
The number of DQS groups is
calculated automatically from the Total
interface width and the DQ/DQS group
size parameters.
Number of chip selects (DDR2 and DDR3 SDRAM device only)
The number of chip-selects the IP core
uses for the current device
configuration.
Specify the total number of chip-selects
according to the number of memory
device.
Number of clocks
The width of the clock bus on the
memory interface.
Row address width
The width of the row address on the
memory interface.
Column address width
The width of the column address on the
memory interface.
Bank-address width
The width of the bank address bus on
the memory interface.
Enable DM pins
Specifies whether the DM pins of the
memory device are driven by the FPGA.
You can turn off this option to avoid
overusing FPGA device pins when using
x4 mode memory devices.
When you are using x4 mode memory
devices, turn off this option for DDR3
SDRAM.
You must turn on this option if you are
using Avalon byte enable.
DQS# Enable (DDR2)
Turn on differential DQS signaling to
improve signal integrity and system
performance.
This option is available for DDR2
SDRAM only.
7.3.3.2.1 Memory Initialization Options for DDR2
Memory Initialization Options—DDR2
Address and command parity
Enables address/command parity
checking. This is required for
Registered DIMM.
Mode Register 0
Burst length
Specifies the burst length.
Read burst type
Specifies accesses within a given burst
in sequential or interleaved order.
Specify sequential ordering for use
with the Intel memory controller.
Specify interleaved ordering only for
use with an interleaved-capable
custom controller, when the Generate
PHY only parameter is enabled on the
PHY Settings tab.
DLL precharge power down
Determines whether the DLL in the
memory device is in slow exit mode or
in fast exit mode during precharge
power down. For more information,
refer to memory vendor data sheet.
Memory CAS latency setting
Determines the number of clock cycles
between the READ command and the
availability of the first bit of output
data at the memory device. For more
information, refer to memory vendor
data sheet speed bin table.
Set this parameter according to the
target memory speed grade and
memory clock frequency.
Output drive strength setting
Determines the output driver
impedance setting at the memory
device.
To obtain the optimum signal integrity
performance, select the optimum
setting based on the board simulation
results.
Mode Register 1
Mode Register 2
Memory additive CAS latency setting
Determines the posted CAS additive
latency of the memory device.
Enable this feature to improve
command and bus efficiency, and
increase system bandwidth. For more
information, refer to the Optimizing
the Controller chapter.
Memory on-die termination (ODT) setting
Determines the on-die termination
resistance at the memory device.
To obtain the optimum signal integrity
performance, select the optimum
setting based on the board simulation
results.
SRT Enable
Determines the self-refresh
temperature (SRT). Select 1x refresh
rate for normal temperature (0-85C), or
select 2x refresh rate for
high temperature (>85C).
7.3.3.2.2 Memory Initialization Options for DDR3
Memory Initialization Options—DDR3
Mirror Addressing: 1 per chip select
Specifies the mirror addressing for multiple rank
DIMMs. Refer to memory vendor data sheet for
more information. Enter ranks with mirrored
addresses in this field. For example, for four chip
selects, enter 1101 to mirror the address on chip
select #3, #2, and #0.
Address and command parity
Enables address/command parity checking to
detect errors in data transmission. This is required
for registered DIMM (RDIMM).
Mode Register 0
Read burst type
Specifies accesses within a given burst in
sequential or interleaved order.
Specify sequential ordering for use with the Intel
memory controller. Specify interleaved ordering
only for use with an interleaved-capable custom
controller, when the Generate PHY only
parameter is enabled on the PHY Settings tab.
DLL precharge power down
Specifies whether the DLL in the memory device is
off or on during precharge power-down.
Memory CAS latency setting
The number of clock cycles between the read
command and the availability of the first bit of
output data at the memory device; the valid
settings depend on the interface frequency. Refer
to the memory vendor data sheet speed bin table.
Set this parameter according to the target memory
speed grade and memory clock frequency.
Output drive strength setting
The output driver impedance setting at the
memory device.
To obtain the optimum signal integrity
performance, select the optimum setting based on
the board simulation results.
Memory additive CAS latency
setting
The posted CAS additive latency of the memory
device.
Mode Register 1
Enable this feature to improve command and bus
efficiency, and increase system bandwidth. For
more information, refer to the Optimizing the
Controller chapter.
Mode Register 2
ODT Rtt nominal value
The on-die termination resistance at the memory
device.
To obtain the optimum signal integrity
performance, select the optimum setting based on
the board simulation results.
Auto selfrefresh method
Disable or enable auto selfrefresh.
Selfrefresh temperature
Specifies the selfrefresh temperature as Normal or
Extended.
Memory write CAS latency setting
The number of clock cycles from the release of
the internal write to the latching of the first data
in at the memory device; the valid settings depend
on the interface frequency. Refer to the memory
vendor data sheet speed bin table and set this
parameter according to the target memory speed
grade and memory clock frequency.
Dynamic ODT (Rtt_WR) value
The mode of the dynamic ODT feature of the
memory device. This is used for multi-rank
configurations. Refer to DDR2 and DDR3 SDRAM
Board Layout Guidelines.
To obtain the optimum signal integrity
performance, select the optimum setting based on
the board simulation results.
DDR3 RDIMM/LRDIMM Control
Words
The memory device features a set of control words
of SSTE32882 registers. These 4-bit control words
of serial presence-detect (SPD) information allow
the controller to optimize device properties to
match different DIMM net topologies.
You can obtain the control words from the memory
manufacturer's data sheet. You enter each word in
hexadecimal, starting with RC15 on the left and
ending with RC0 on the right.
Note: This parameter is not available for MAX 10
devices.
LRDIMM Additional Control Words
The memory device features a set of control words
of SSTE32882 registers. These 4-bit control words
of serial presence-detect (SPD) information allow
the controller to optimize device properties to
match different DIMM net topologies.
You can obtain the control words from the memory
manufacturer's data sheet. You enter each word in
hexadecimal, starting with SPD (77-72) or
SPD(83-78) on the left and ending with
SPD(71-69) on the right.
Note: This parameter is not available for MAX 10
devices.
7.3.3.2.3 Memory Initialization Options for LPDDR2
Memory Initialization Options—LPDDR2
Mode Register 1
Burst Length
Specifies the burst length.
Read Burst Type
Specifies accesses within a given burst
in sequential or interleaved order.
Specify sequential ordering for use
with the Intel memory controller.
Specify interleaved ordering only for
use with an interleaved-capable
custom controller, when the Generate
PHY only parameter is enabled on the
PHY Settings tab.
Mode Register 2
Memory CAS latency setting
Determines the number of clock cycles
between the READ command and the
availability of the first bit of output
data at the memory device.
Set this parameter according to the
target memory interface frequency.
Refer to memory data sheet and also
target memory speed grade.
Mode Register 3
Output drive strength settings
Determines the output driver
impedance setting at the memory
device.
To obtain the optimum signal integrity
performance, select the optimum
setting based on the board simulation
results.
7.3.3.3 Memory Parameters for QDR II and QDR II+ SRAM for UniPHY IP
The following table describes the memory parameters for QDR II and QDR II+ SRAM
for UniPHY IP.
Use the Memory Parameters tab to apply the memory parameters from your
memory manufacturer’s data sheet.
Table 70.
Memory Parameters for QDR II and QDR II+ SRAM
Parameter
Description
Address width
The width of the address bus on the memory device.
Data width
The width of the data bus on the memory device.
Data-mask width
The width of the data-mask on the memory device.
CQ width
The width of the CQ (read strobe) bus on the memory device.
K width
The width of the K (write strobe) bus on the memory device.
Burst length
The burst length supported by the memory device. For more information, refer to memory
vendor data sheet.
Topology
x36 emulated mode
Emulates a larger memory-width interface using smaller memory-width interfaces on the FPGA.
Turn on this option when the target FPGA does not support x36 DQ/DQS groups. This option
allows two x18 DQ/DQS groups to emulate one x36 read data group.
Emulated write groups
Number of write groups used to form the x36 memory interface on the FPGA. Select 2 to use
two x18 DQ/DQS groups to form the x36 write data group. Select 4 to use four x9 DQ/DQS
groups to form the x36 write data group.
Device width
Specifies the number of memory devices used for width expansion.
7.3.3.4 Memory Parameters for RLDRAM II for UniPHY IP
The following table describes the memory parameters for RLDRAM II.
Use the Memory Parameters tab to apply the memory parameters from your
memory manufacturer’s data sheet.
Table 71.
Memory Parameters for RLDRAM II
Parameter
Description
Address width
The width of the address bus on the memory device.
Data width
The width of the data bus on the memory device.
Bank-address width
The width of the bank-address bus on the memory device.
Data-mask width
The width of the data-mask on the memory device.
QK width
The width of the QK (read strobe) bus on the memory device.
Select 1 when data width is set to 9. Select 2 when data width is set to 18 or 36.
DK width
The width of the DK (write strobe) bus on the memory device.
Select 1 when data width is set to 9 or 18. Select 2 when data width is set to 36.
Burst length
The burst length supported by the memory device. For more information, refer to memory
vendor data sheet.
Memory mode register
configuration
Configuration bits that set the memory mode. Select the option according to the interface
frequency.
Device impedance
Select External (ZQ) to adjust the driver impedance using the external impedance resistor
(RQ). The output impedance range is 25-60 ohms. You must connect the RQ resistor between the ZQ
pin and ground. The value of RQ must be 5 times the output impedance; for example, a 60-ohm
output impedance requires a 300-ohm RQ resistor (see the sketch after this table).
Set the value according to the board simulation.
On-Die Termination
Turn on this option to enable ODT in the memory, terminating the DQ and DM pins to Vtt.
ODT is dynamically switched off during read operations and on during write operations. Refer to
the memory vendor data sheet for more information.
Topology
Device width
Specifies the number of memory devices used for width expansion.
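The RQ relationship described in the Device impedance row above is simple arithmetic. The following is a minimal sketch in Python; the function name is illustrative only and is not part of any Intel IP.

# Minimal sketch (illustrative only): choosing the RLDRAM II external RQ
# resistor from a target output impedance, per the "RQ = 5 x output
# impedance" relationship in the Device impedance row above.
def rq_for_output_impedance(z_out_ohms):
    """Return the RQ resistor value (ohms) for a target output impedance."""
    if not 25 <= z_out_ohms <= 60:
        raise ValueError("RLDRAM II output impedance range is 25-60 ohms")
    return 5 * z_out_ohms

print(rq_for_output_impedance(60))  # 300 ohms of RQ between the ZQ pin and ground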
7.3.3.5 Memory Timing Parameters for DDR2, DDR3, and LPDDR2 SDRAM for
UniPHY IP
The following table lists the memory timing parameters for DDR2, DDR3, and LPDDR2
SDRAM.
Use the Memory Timing tab to apply the memory timings from your memory
manufacturer’s data sheet.
Table 72.
Parameter Description
Parameter
Protocol
Description
tIS (base)
DDR2, DDR3, LPDDR2
Address and control setup to CK clock rise. Set
according to the memory speed grade and refer to
the memory vendor data sheet.
tIH (base)
DDR2, DDR3, LPDDR2
Address and control hold after CK clock rise. Set
according to the memory speed grade and refer to
the memory vendor data sheet.
tDS (base)
DDR2, DDR3, LPDDR2
Data setup to clock (DQS) rise. Set according to the
memory speed grade and refer to the memory
vendor data sheet.
tDH (base)
DDR2, DDR3, LPDDR2
Data hold after clock (DQS) rise. Set according to
the memory speed grade and refer to the memory
vendor data sheet.
tDQSQ
DDR2, DDR3, LPDDR2
DQS, DQS# to DQ skew, per access. Set according
to the memory speed grade and refer to the
memory vendor data sheet.
tQHS
DDR2, LPDDR2
DQ output hold time from DQS, DQS# (absolute
time value)
tQH
DDR3
DQ output hold time from DQS, DQS# (percentage
of tCK). Set according to the memory speed grade
and refer to the memory vendor data sheet.
tDQSCK
DDR2, DDR3, LPDDR2
DQS output access time from CK/CK#. Set
according to the memory speed grade and refer to
the memory vendor data sheet.
tDQSCK Delta Short
LPDDR2
Absolute difference between any two tDQSCK
measurements (within a byte lane) within a
contiguous sequence of bursts in a 160ns rolling
window. Set according to the memory speed grade
and refer to the memory vendor data sheet.
tDQSCK Delta Medium
LPDDR2
Absolute difference between any two tDQSCK
measurements (within a byte lane) within a
contiguous sequence of bursts in a 1.6us rolling
window. Set according to the memory speed grade
and refer to the memory vendor data sheet.
tDQSCK Delta Long
LPDDR2
Absolute difference between any two tDQSCK
measurements (within a byte lane) within a
contiguous sequence of bursts in a 32ms rolling
window. Set according to the memory speed grade
and refer to the memory vendor data sheet.
tDQSS
DDR2, DDR3, LPDDR2
First latching edge of DQS to associated clock edge
(percentage of tCK). Set according to the memory
speed grade and refer to the memory vendor data
sheet.
tDQSH
DDR2, LPDDR2
tQSH
DDR3
DQS Differential High Pulse Width (percentage of
tCK). Specifies the minimum high time of the DQS
signal received by the memory. Set according to the
memory speed grade and refer to the memory
vendor data sheet.
tDSH
DDR2, DDR3, LPDDR2
DQS falling edge hold time from CK (percentage of
tCK). Set according to the memory speed grade
and refer to the memory vendor data sheet.
tDSS
DDR2, DDR3, LPDDR2
DQS falling edge to CK setup time (percentage of
tCK). Set according to the memory speed grade
and refer to the memory vendor data sheet.
tINIT
DDR2, DDR3, LPDDR2
Memory initialization time at power-up. Set
according to the memory speed grade and refer to
the memory vendor data sheet.
tMRD
DDR2, DDR3
tMRW
LPDDR2
Load mode register command period. Set according
to the memory speed grade and refer to the
memory vendor data sheet.
tRAS
DDR2, DDR3, LPDDR2
Active to precharge time. Set according to the
memory speed grade and refer to the memory
vendor data sheet.
tRCD
DDR2, DDR3, LPDDR2
Active to read or write time. Set according to the
memory speed grade and refer to the memory
vendor data sheet.
tRP
DDR2, DDR3, LPDDR2
Precharge command period. Set according to the
memory speed grade and refer to the memory
vendor data sheet.
tREFI
DDR2, DDR3
tREFICab
LPDDR2
Refresh command interval. (All banks only for
LPDDR2.) Set according to the memory speed
grade and temperature range. Refer to the memory
vendor data sheet.
tRFC
DDR2, DDR3
tRFCab
LPDDR2
Auto-refresh command interval. (All banks
only for LPDDR2.) Set according to the
memory device capacity. Refer to the
memory vendor data sheet.
tWR
DDR2, DDR3, LPDDR2
Write recovery time. Set according to the memory
speed grade and refer to the memory vendor data
sheet.
tWTR
DDR2, DDR3, LPDDR2
Write to read period. Set according to the memory
speed grade and memory clock frequency. Refer to
the memory vendor data sheet and calculate the
value based on the memory clock frequency (a
conversion sketch follows this table).
tFAW
DDR2, DDR3, LPDDR2
Four active window time. Set according to the
memory speed grade and page size. Refer to the
memory vendor data sheet.
tRRD
DDR2, DDR3, LPDDR2
RAS to RAS delay time. Set according to the
memory speed grade, page size and memory clock
frequency. Refer to the memory vendor data sheet.
Calculate the value based on the memory interface
frequency and memory clock frequency.
tRTP
DDR2, DDR3, LPDDR2
Read to precharge time. Set according to memory
speed grade. Refer to the memory vendor data
sheet. Calculate the value based on the memory
interface frequency and memory clock frequency.
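Several of the parameters above (for example tWTR, tRRD, and tFAW) depend on the memory clock frequency you are targeting. Purely as a hedged illustration, and assuming the value must be expressed in whole memory-clock cycles rounded up, the conversion might look like the sketch below; confirm the required units against the parameter editor and the memory data sheet.

# Hedged sketch (assumption, not from the handbook): converting a data-sheet
# timing value in nanoseconds into memory-clock cycles, rounding up so the
# data-sheet minimum is never violated.
import math

def ns_to_memory_clock_cycles(t_ns, mem_clk_mhz):
    """Round a nanosecond timing parameter up to whole memory-clock cycles."""
    t_ck_ns = 1000.0 / mem_clk_mhz  # one memory-clock period in ns
    return math.ceil(t_ns / t_ck_ns)

# Example: a 7.5 ns tWTR at a 400 MHz memory clock (tCK = 2.5 ns) needs 3 cycles.
print(ns_to_memory_clock_cycles(7.5, 400.0))  # 3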
7.3.3.6 Memory Timing Parameters for QDR II and QDR II+ SRAM for UniPHY IP
The following table lists the memory timing parameters for QDR II and QDR II+
SRAM.
Use the Memory Timing tab to apply the memory timings from your memory
manufacturer’s data sheet.
Table 73.
Parameter Description
Parameter
Description
QDR II and QDR II+ SRAM
tWL (cycles)
The write latency. Set the write latency to 0 for a burst length of 2, and to 1 for a burst
length of 4.
tRL (cycles)
The read latency. Set according to memory protocol. Refer to memory data sheet.
tSA
The address and control setup to K clock rise. Set according to memory protocol. Refer to
memory data sheet.
tHA
The address and control hold after K clock rise. Set according to memory protocol. Refer to
memory data sheet.
tSD
The data setup to clock (K/K#) rise. Set according to memory protocol. Refer to memory
data sheet.
tHD
The data hold after clock (K/K#) rise. Set according to memory protocol. Refer to memory
data sheet.
tCQD
Echo clock high to data valid. Set according to memory protocol. Refer to memory data
sheet.
tCQDOH
Echo clock high to data invalid. Set according to memory protocol. Refer to memory data
sheet.
Internal jitter
The QDR II/II+ internal jitter. Refer to the memory data sheet.
TCQHCQnH
The CQ clock rise to CQn clock rise (rising edge to rising edge). Set according to memory
speed grade. Refer to memory data sheet.
TKHKnH
The K clock rise to Kn clock rise (rising edge to rising edge). Set according to memory speed
grade. Refer to memory data sheet.
7.3.3.7 Memory Timing Parameters for RLDRAM II for UniPHY IP
The following table lists the memory timing parameters for RLDRAM II.
Use the Memory Timing tab to apply the memory timings from your memory
manufacturer’s data sheet.
Table 74.
Memory Timing Parameters
Parameter
Description
RLDRAM II
Maximum memory clock
frequency
The maximum frequency at which the memory device can run. Set according to memory
speed grade. Refer to memory data sheet.
Refresh interval
The refresh interval. Set according to memory speed grade. Refer to memory data sheet.
tCKH (%)
The input clock (CK/CK#) high expressed as a percentage of the full clock period. Set
according to memory speed grade. Refer to memory data sheet.
tQKH (%)
The read clock (QK/QK#) high expressed as a percentage of tCKH. Set according to memory
speed grade. Refer to memory data sheet.
tAS
Address and control setup to CK clock rise. Set according to memory speed grade. Refer to
memory data sheet.
tAH
Address and control hold after CK clock rise. Set according to memory speed grade. Refer to
memory data sheet.
tDS
Data setup to clock (CK/CK#) rise. Set according to memory speed grade. Refer to memory
data sheet.
tDH
Data hold after clock (CK/CK#) rise. Set according to memory speed grade. Refer to
memory data sheet.
tQKQ_max
QK clock edge to DQ data edge (in same group). Set according to memory speed grade.
Refer to memory data sheet.
tQKQ_min
QK clock edge to DQ data edge (in same group). Set according to memory speed grade.
Refer to memory data sheet.
tCKDK_max
Clock to input data clock (max). Set according to memory speed grade. Refer to memory
data sheet.
tCKDK_min
Clock to input data clock (min). Set according to memory speed grade. Refer to memory
data sheet.
7.3.3.8 Memory Parameters for RLDRAM 3 for UniPHY IP
The following tables list the memory parameters for RLDRAM 3 for UniPHY IP.
Use the Memory Timing tab to apply the memory timings from your memory
manufacturer’s data sheet.
Table 75.
Memory Parameters for RLDRAM 3 for UniPHY
Parameter
Description
Enable data-mask pins
Specifies whether the DM pins of the memory device are driven by the FPGA.
Data-mask width
The width of the data-mask on the memory device.
Data width
The width of the data bus on the memory device.
QK width
The width of the QK (read strobe) bus on the memory device.
Select 2 when data width is set to 18. Select 4 when data width is set to 36.
DK width
The width of the DK (write strobe) bus on the memory device. For x36 device, DQ[8:0] and
DQ[26:18] are referenced to DK0/DK0#, and DQ[17:9] and DQ[35:27] are referenced to
DK1/DK1#.
Address width
The width of the address bus on the memory device.
Bank-address width
The width of the bank-address bus on the memory device.
Burst length
The burst length supported by the memory device. Refer to memory vendor data sheet.
tRC
Mode register bits that set the tRC. Set the tRC according to the memory speed grade and
data latency. Refer to the tRC table in the memory vendor data sheet.
Data Latency
Mode register bits that set the latency. Set the latency according to the interface frequency and
memory speed grade. Refer to the speed bin table in the memory data sheet.
Output Drive
Mode register bits that set the output drive impedance setting. Set the value according to
the board simulation.
ODT
Mode register bits that set the ODT setting. Set the value according to the board simulation.
AREF Protocol
Mode register setting for refreshing the memory content of a bank. Select Multibank to allow
refreshing four banks simultaneously. Select Bank Address Control to refresh a particular bank by
setting the bank address.
Write Protocol
Mode register setting for the write protocol. When a multiple-bank option (dual bank or quad bank)
is selected, identical data is written to multiple banks.
Topology
Device width
Specifies the number of memory devices used for width
expansion.
Table 76.
Memory Timing Parameters for RLDRAM 3 for UniPHY
Parameter
Description
Memory Device Timing
Maximum memory clock
frequency
The maximum frequency at which the memory device can run.
tDS (base)
Base specification for data setup to DK/DK#. Set according to memory speed grade.
Refer to memory data sheet.
tDH (base)
Base specification for data hold from DK/DK#. Set according to memory speed grade.
Refer to memory data sheet.
tQKQ_max
QK/QK# clock edge to DQ data edge (in same group). Set according to memory speed
grade. Refer to memory data sheet.
tQH (% of CK)
DQ output hold time from QK/QK#. Set according to memory speed grade. Refer to
memory data sheet.
tCKDK_max (% of CK)
Clock to input data clock (max). Set according to memory speed grade. Refer to
memory data sheet.
tCKDK_min (% of CK)
Clock to input data clock (min). Set according to memory speed grade. Refer to
memory data sheet.
tCKQK_max
QK edge to clock edge skew (max). Set according to memory speed grade. Refer to
memory data sheet.
tIS (base)
Base specification for address and control setup to CK. Set according to memory speed
grade. Refer to memory data sheet.
tIH (base)
Base specification for address and control hold from CK. Set according to memory
speed grade. Refer to memory data sheet.
Controller Timing
Read-to-Write NOP commands
(min)
Minimum number of no-operation commands following a read command and before a
write command. The value must be at least ((Burst Length/2) + RL - WL + 2). The
value, along with other delay/skew parameters, is used by the "Bus Turnaround"
timing analysis to determine whether bus contention is an issue.
Set according to the controller specification.
Write-to-Read NOP commands
(min)
Minimum number of no-operation commands following a write command and before a
read command. The value must be at least ((Burst Length/2) + WL - RL + 1). The
value, along with other delay/skew parameters, is used by the "Bus Turnaround"
timing analysis to determine whether bus contention is an issue.
Set according to the controller specification. (A sketch of both minimum-NOP
calculations follows this table.)
RLDRAM 3 Board Derate
CK/CK# slew rate
(differential)
CK/CK# slew rate (differential).
Address/Command slew rate
Address and command slew rate.
DK/DK# slew rate
(Differential)
DK/DK# slew rate (differential).
DQ slew rate
DQ slew rate.
tIS
Address/command setup time to CK.
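The two Controller Timing rows above give explicit lower bounds for the read-to-write and write-to-read NOP counts. The following minimal sketch simply evaluates those bounds; the function names and the example latencies are illustrative only, not Intel IP APIs.

# Minimal sketch of the bus-turnaround lower bounds quoted in the Controller
# Timing rows above. Function names and example latencies are illustrative.
def min_read_to_write_nops(burst_length, rl, wl):
    """Minimum NOPs between a read command and the following write command."""
    return burst_length // 2 + rl - wl + 2

def min_write_to_read_nops(burst_length, rl, wl):
    """Minimum NOPs between a write command and the following read command."""
    return burst_length // 2 + wl - rl + 1

# Example with a hypothetical burst length of 4, RL = 16, WL = 17:
print(min_read_to_write_nops(4, 16, 17))  # 3
print(min_write_to_read_nops(4, 16, 17))  # 4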
7.3.4 Board Settings
Use the Board Settings tab to model the board-level effects in the timing analysis.
The Board Settings tab allows you to specify the following settings:
•
Setup and hold derating (for LPDDR2/DDR2/DDR3 SDRAM, RLDRAM 3, and
RLDRAM II for UniPHY IP)
•
Channel Signal Integrity
•
Board skews (for UniPHY IP)
Note:
For accurate timing results, you must enter board settings parameters that are correct
for your PCB.
The IP core supports single and multiple chip-select configurations. Intel has
determined the effects on the output signaling of single-rank configurations for certain
Intel boards, and included the channel uncertainties in the Quartus Prime timing
models.
Because the Quartus Prime timing models hold channel uncertainties that are
representative of specific Intel boards, you must determine the board-level effects of
your board, including any additional channel uncertainty relative to Intel's reference
design, and enter those values into the Board Settings panel in the parameter editor.
You can use HyperLynx or a similar simulator to obtain values that are representative
of your board.
For more information about how to include your board simulation results in the
Quartus Prime software, refer to the following sections. For more information about
how to assign pins using pin planners, refer to the design flow tutorials and design
examples on the List of Designs Using Intel External Memory IP page of
www.alterawiki.com.
For more general information about timing deration methodology, refer to the Timing
Deration Methodology for Multiple Chip Select DDR2 and DDR3 SDRAM Designs section
in the Analyzing Timing of Memory IP chapter.
Related Links
•
Analyzing Timing of Memory IP
•
List of Designs using Intel External Memory IP
7.3.4.1 Setup and Hold Derating for UniPHY IP
The slew rate of the output signals affects the setup and hold times of the memory
device, and thus the write margin. You can specify the slew rate of the output signals
to see their effect on the setup and hold times of both the address and command
signals and the DQ signals, or alternatively, you may want to specify the setup and
hold times directly.
For RDIMMs, the slew rate is defined at the register on the RDIMM, instead of at the
memory component. For LRDIMMs, the slew rate is defined at the buffer on the
LRDIMM, instead of at the memory component.
Note:
You should enter information derived during your PCB development process from
prelayout (line) and postlayout (board) simulation.
The following table lists the setup and hold derating parameters.
Table 77.
Setup and Hold Derating Parameters
Parameter
Description
LPDDR2/DDR2/DDR3 SDRAM/RLDRAM 3
Derating method
Derating method. The default settings are based on Intel internal board simulation data.
To obtain accurate timing analysis according to the condition of your board, Intel
recommends that you perform board simulation and enter the slew rate in the Quartus
Prime software to calculate the derated setup and hold time automatically or enter the
derated setup and hold time directly.
For more information, refer to the “Timing Deration Methodology for Multiple Chip Select
DDR2 and DDR3 SDRAM Designs” section in the Analyzing Timing of Memory IP chapter.
CK/CK# slew rate
(differential)
CK/CK# slew rate (differential).
Address/Command slew
rate
Address and command slew rate.
DQS/DQS# slew rate
(Differential)
DQS and DQS# slew rate (differential).
DQ slew rate
DQ slew rate.
tIS
Address/command setup time to CK.
tIH
Address/command hold time from CK.
tDS
Data setup time to DQS.
tDH
Data hold time from DQS.
RLDRAM II
tAS Vref to CK/CK#
Crossing
For a given address/command and CK/CK# slew rate, the memory device data sheet
provides a corresponding "tAS Vref to CK/CK# Crossing" value that can be used to
determine the derated address/command setup time.
tAS VIH MIN to CK/CK#
Crossing
For a given address/command and CK/CK# slew rate, the memory device data sheet
provides a corresponding "tAS VIH MIN to CK/CK# Crossing" value that can be used to
determine the derated address/command setup time.
tAH CK/CK# Crossing to
Vref
For a given address/command and CK/CK# slew rate, the memory device data sheet
provides a corresponding "tAH CK/CK# Crossing to Vref" value that can be used to
determine the derated address/command hold time.
tAH CK/CK# Crossing to
VIH MIN
For a given address/command and CK/CK# slew rate, the memory device data sheet
provides a corresponding "tAH CK/CK# Crossing to VIH MIN" value that can be used to
determine the derated address/command hold time.
tDS Vref to CK/CK#
Crossing
For a given data and DK/DK# slew rate, the memory device data sheet provides a
corresponding "tDS Vref to CK/CK# Crossing" value that can be used to determine the
derated data setup time.
tDS VIH MIN to CK/CK#
Crossing
For a given data and DK/DK# slew rate, the memory device data sheet provides a
corresponding "tDS VIH MIN to CK/CK# Crossing" value that can be used to determine the
derated data setup time.
tDH CK/CK# Crossing to
Vref
For a given data and DK/DK# slew rate, the memory device data sheet provides a
corresponding "tDH CK/CK# Crossing to Vref" value that can be used to determine the
derated data hold time.
tDH CK/CK# Crossing to
VIH MIN
For a given data and DK/DK# slew rate, the memory device data sheet provides a
corresponding "tDH CK/CK# Crossing to VIH MIN" value that can be used to determine the
derated data hold time.
Derated tAS
The derated address/command setup time is calculated automatically from the "tAS", the
"tAS Vref to CK/CK# Crossing", and the "tAS VIH MIN to CK/CK# Crossing" parameters.
Derated tAH
The derated address/command hold time is calculated automatically from the "tAH", the
"tAH CK/CK# Crossing to Vref", and the "tAH CK/CK# Crossing to VIH MIN" parameters.
Derated tDS
The derated data setup time is calculated automatically from the "tDS", the "tDS Vref to
CK/CK# Crossing", and the "tDS VIH MIN to CK/CK# Crossing" parameters.
Derated tDH
The derated data hold time is calculated automatically from the "tDH", the "tDH CK/CK#
Crossing to Vref", and the "tDH CK/CK# Crossing to VIH MIN" parameters.
7.3.4.2 Intersymbol Interference Channel Signal Integrity for UniPHY IP
Channel signal integrity is a measure of the distortion of the eye due to intersymbol
interference, crosstalk, or other effects.
Typically, when going from a single-rank configuration to a multi-rank configuration
there is an increase in the channel loss, because there are multiple stubs causing
reflections. Although the Quartus Prime timing models include some channel
uncertainty, you must perform your own channel signal integrity simulations and enter
the additional channel uncertainty, relative to the reference eye, into the parameter
editor GUI.
For details about measuring channel loss parameters and entering channel signal
integrity information into the parameter editor GUI, refer to the Wiki:
http://www.alterawiki.com/wiki/Measuring_Channel_Signal_Integrity.
The following table lists intersymbol interference parameters.
Table 78.
ISI Parameters
Parameter
Description
Derating method
Choose between default Intel settings (with specific Intel boards) or manually
enter board simulation numbers obtained for your specific board.
This option is supported in LPDDR2/DDR2/DDR3 SDRAM only.
Address and command eye
reduction (setup)
The reduction in the eye diagram on the setup side (or left side of the eye) due to
ISI on the address and command signals compared to a case when there is no
ISI. (For single rank designs, ISI can be zero; in multirank designs, ISI is
necessary for accurate timing analysis.)
For more information about how to measure the ISI value for the address and
command signals, refer to the “Measuring Eye Reduction for Address/Command,
DQ, and DQS Setup and Hold Time” section in Analyzing Timing of Memory IP .
Address and command eye
reduction (hold)
The reduction in the eye diagram on the hold side (or right side of the eye) due to
ISI on the address and command signals compared to a case when there is no
ISI.
For more information about how to measure the ISI value for the address and
command signals, refer to “Measuring Eye Reduction for Address/Command, DQ,
and DQS Setup and Hold Time” section in Analyzing Timing of Memory IP.
DQ/D eye reduction
The total reduction in the eye diagram due to ISI on DQ signals compared to a
case when there is no ISI. Intel assumes that the ISI reduces the eye width
symmetrically on the left and right side of the eye.
For more information about how to measure the ISI value for the address and
command signals, refer to “Measuring Eye Reduction for Address/Command, DQ,
and DQS Setup and Hold Time” section in Analyzing Timing of Memory IP .
Delta DQS/Delta K/Delta DK
arrival time
The increase in variation on the range of arrival times of DQS compared to a case
when there is no ISI. Intel assumes that the ISI causes DQS to further vary
symmetrically to the left and to the right.
For more information about how to measure the ISI value for the address and
command signals, refer to “Measuring Eye Reduction for Address/Command, DQ,
and DQS Setup and Hold Time” section in Analyzing Timing of Memory IP .
7.3.4.3 Board Skews for UniPHY IP
PCB traces can have skews between them that can reduce timing margins.
Furthermore, skews between different chip selects can further reduce the timing
margin in multiple chip-select topologies.
The Board Skews section of the parameter editor allows you to enter parameters to
compensate for these variations.
Note:
You must ensure the timing margin reported in TimeQuest Report DDR is positive
when the board skew parameters are correct for the PCB.
The following tables list the board skew parameters. For parameter equations
containing delay values, delays should be measured as follows:
•
Non-fly-by topology (Balanced Tree)
—
For discrete devices–all the delays (CK, Addr/Cmd, DQ, and DQS) are from the
FPGA to every memory device
—
For UDIMMs–all the delay (CK, Addr/Cmd, DQ and DQS) from the FPGA to
UDIMM connector for every memory device on the UDIMM. If UDIMM delay
information is available, calculate delays to every memory device on the
UDIMM.
—
For RDIMMs–the Addr/Cmd and CK delay are from the FPGA to the register on
the RDIMM. The DQ and DQS delay are from FPGA to RDIMM connector for
every memory device on the RDIMM.
—
For LRDIMMS–the delay from the FPGA to the register on the LRDIMM.
•
Fly-by topology
—
For discrete devices–the Addr/Cmd and CK delay are from the FPGA to the
first memory device. The DQ and DQS delay are from FPGA to every memory
device.
—
For UDIMMs–the Addr/Cmd and CK delay are from the FPGA to the UDIMM
connector. The DQ and DQS delay are from the FPGA to UDIMM connector for
every memory device on the UDIMM.
—
For RDIMMs–the Addr/Cmd and CK delay are from the FPGA to the register on
the RDIMM. The DQ and DQS delay are from FPGA to RDIMM connector for
every memory device on the RDIMM.
—
For LRDIMMS–the delay from the FPGA to the buffer on the LRDIMM.
Equations apply to any given memory device, except when marked by the board or
group qualifiers (_b or _g), where they apply to the particular device or group being
iterated over.
Use the Board Skew Parameter Tool to help you calculate the board skews.
Related Links
Board Skew Parameter Tool
7.3.4.3.1 Board Skew Parameters for LPDDR2/DDR2/DDR3 SDRAM
The following table lists board skew parameters for LPDDR2, DDR2, and DDR3
interfaces.
Table 79.
Parameter Descriptions
Parameter
Description
FPGA DQ/DQS Package
Skews Deskewed on
Board
Enable this parameter if you will deskew the FPGA package with your board traces on the DQ and DQS pins. This option
increases the read capture and write margins. Enable this option when the memory clock frequency is greater than 800 MHz.
Enabling this option improves the read capture and write timing margin. You can also rely on the read capture and write
timing margin in the timing report to decide whether to enable this option.
When this option is enabled, package skews are output on the DQ and DQS pins in the Pin-Out File (.pin) and package
skew is not included in timing analysis. All of the other board delay and skew parameters related to DQ or DQS must
consider the package and the board together. For more information, refer to DDR2 and DDR3 Board Layout Guidelines.
Address/Command
Package Deskew
Enable this parameter if you will deskew the FPGA package with your board traces on the address and command pins.
This option increases the address and command margins. Enable this option when the memory clock frequency is greater
than 800 MHz. Enabling this option improves the address and command timing margin. You can also rely on the address
and command margin in the timing report to decide whether to enable this option.
When this option is enabled, package skews are output on the address and command pins in the Pin-Out File (.pin) and
package skew is not included in timing analysis. All of the other board delay and skew parameters related to address and
command must consider the package and the board together. For more information, refer to DDR2 and DDR3 Board
Layout Guidelines.
Maximum CK delay to
DIMM/device
The delay of the longest CK trace from the FPGA to the memory device, whether on a DIMM or on the same PCB as the
FPGA, is expressed by the following equation:
Where n is the number of memory clocks and r is the number of ranks of the DIMM/device. For example, in a dual-rank
DIMM implementation, if there are 2 pairs of memory clocks in each rank of the DIMM, the maximum CK delay is expressed
by the following equation:
Maximum DQS delay to
DIMM/device
The delay of the longest DQS trace from the FPGA to the memory device, whether on a DIMM or on the same PCB as the
FPGA, is expressed by the following equation:
Where n is the number of DQS signals and r is the number of ranks of the DIMM/device. For example, in a dual-rank DIMM
implementation, if there are 2 DQS signals in each rank of the DIMM, the maximum DQS delay is expressed by the following
equation:
Minimum delay
difference between CK
and DQS
The minimum skew or smallest positive skew (or largest negative skew) between the CK signal and any DQS signal when
arriving at the same DIMM/device, over all DIMMs/devices, is expressed by the following equation:
Where n is the number of memory clocks, m is the number of DQS signals, and r is the number of ranks of the DIMM/device.
For example, in a dual-rank DIMM implementation, if there are 2 pairs of memory clocks and 4 DQS signals (two for each
clock) for each rank of the DIMM, the minimum delay difference between CK and DQS is expressed by the following equation:
This parameter value affects the write leveling margin for DDR3 interfaces with leveling in multi-rank configurations. The
parameter value also applies to non-leveling configurations of any number of ranks, with the requirement that DQS must
have positive margins in TimeQuest Report DDR.
For multiple boards, the minimum skew between the CK signal and any DQS signal when arriving at the same DIMM over
all DIMMs is expressed by the following equation, if you want to use the same design for several different boards:
Note: If you are using a clamshell topology in a multi-rank/multiple chip-select design with either DIMM or discrete devices,
or using dual-die devices, the above calculations do not apply; you may use the default values in the GUI.
Maximum delay
difference between CK
and DQS
The maximum skew or smallest negative skew (or largest positive skew) between the CK signal and any DQS signal when
arriving at the same DIMM/device, over all DIMMs/devices, is expressed by the following equation:
Where n is the number of memory clocks, m is the number of DQS signals, and r is the number of ranks of the DIMM/device.
For example, in a dual-rank DIMM implementation, if there are 2 pairs of memory clocks and 4 DQS signals (two for each
clock) for each rank of the DIMM, the maximum delay difference between CK and DQS is expressed by the following equation:
This value affects the write leveling margin for DDR3 interfaces with leveling in multi-rank configurations. This parameter
value also applies to non-leveling configurations of any number of ranks, with the requirement that DQS must have
positive margins in TimeQuest Report DDR.
For multiple boards, the maximum skew (or largest positive skew) between the CK signal and any DQS signal when
arriving at the same DIMM over all DIMMs is expressed by the following equation, if you want to use the same design for
several different boards:
Note: If you are using a clamshell topology in a multi-rank/multiple chip-select design with either DIMM or discrete devices,
or using dual-die devices, the above calculations do not apply; you may use the default values in the GUI.
Maximum skew within
DQS group
The largest skew among DQ and DM signals in a DQS group. This value affects the read capture and write margins for
DDR2 and DDR3 SDRAM interfaces in all configurations (single or multiple chip-select, DIMM or component).
For multiple boards, the largest skew between DQ and DM signals in a DQS group is expressed by the following equation:
Maximum skew
between DQS groups
The largest skew between DQS signals in different DQS groups. This value affects the resynchronization margin in
memory interfaces without leveling, such as DDR2 SDRAM and discrete-device DDR3 SDRAM, in both single and multiple
chip-select configurations. For protocols or families that do not have read resynchronization analysis, this parameter has
no effect.
For multiple boards, the largest skew between DQS signals in different DQS groups is expressed by the following
equation, if you want to use the same design for several different boards:
Average delay
difference between DQ
and DQS
The average delay difference between each DQ signal and the DQS signal, calculated by averaging the longest and
smallest DQ signal delay values minus the delay of DQS. The average delay difference between DQ and DQS is expressed
by the following equation (a computation sketch follows this table):
where n is the number of DQS groups. For a multi-rank or multiple chip-select configuration, the equation is:
Maximum skew within
address and command
bus
The largest skew between the address and command signals for a single board is expressed by the following equation:
For multiple boards, the largest skew between the address and command signals is expressed by the following equation,
if you want to use the same design for several different boards:
Average delay
difference between
address and command
and CK
A value equal to the average of the longest and smallest address and command signal delay values, minus the delay of
the CK signal. The value can be positive or negative. Positive values represent address and command signals that are
longer than CK signals; negative values represent address and command signals that are shorter than CK signals. The
average delay difference between address and command and CK is expressed by the following equation:
where n is the number of memory clocks. For a multi-rank or multiple chip-select configuration, the equation is:
The Quartus Prime software uses this skew to optimize the delay of the address and command signals to have
appropriate setup and hold margins for DDR2 and DDR3 SDRAM interfaces. You should derive this value through board
simulation.
For multiple boards, the average delay difference between address and command and CK is expressed by the following
equation, if you want to use the same design for several different boards:
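The equations referenced throughout this table appear as images in the source document and are not reproduced here. Purely as a hedged illustration of the word descriptions above, the sketch below shows how two of the quantities might be derived from per-trace delay values (in ns) obtained from board simulation; all names and the data layout are assumptions, and only the single-board, single-rank case is shown.

# Hedged sketch only; the handbook's own equations are not reproduced here.
# Illustrates, for a single board and rank, how two of the skew values above
# might be derived from per-trace delays (ns) from board simulation.
def max_skew_within_dqs_group(dq_dm_delays):
    """Largest skew among the DQ and DM traces that share one DQS."""
    return max(dq_dm_delays) - min(dq_dm_delays)

def avg_delay_diff_dq_to_dqs(groups):
    """Average of ((longest DQ + shortest DQ)/2 - DQS delay) across groups.

    Each entry pairs a group's DQ/DM trace delays with that group's DQS delay.
    """
    diffs = [(max(dq) + min(dq)) / 2 - dqs for dq, dqs in groups]
    return sum(diffs) / len(diffs)

groups = [([1.20, 1.24, 1.22], 1.21), ([1.27, 1.30, 1.33], 1.29)]
print(round(max_skew_within_dqs_group(groups[0][0]), 3))  # 0.04 ns
print(round(avg_delay_diff_dq_to_dqs(groups), 3))         # 0.01 ns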
7.3.4.3.2 Board Skew Parameters for QDR II and QDR II+
The following table lists board skew parameters for QDR II and QDR II+ interfaces.
Table 80.
Parameter Descriptions
Parameter
Maximum delay
difference between
devices
Description
The maximum delay difference of data signals between devices is expressed by the following
equation:
For example, in a two-device configuration there is greater propagation delay for data signals
going to and returning from the furthest device relative to the nearest device. This parameter
is applicable for depth expansion. Set the value to 0 for non-depth expansion design.
Maximum skew within
write data group (i.e., K
group)
The maximum skew between D and BWS signals referenced by a common K signal.
Maximum skew within
read data group (i.e., CQ
group)
The maximum skew between Q signals referenced by a common CQ signal.
Maximum skew
between CQ groups
The maximum skew between CQ signals of different read data groups. Set the value to 0 for
non-depth expansion designs.
Maximum skew within
address/command bus
The maximum skew between the address/command signals.
Average delay
difference between
address/command and
K
A value equal to the average of the longest and smallest address/command signal delay
values, minus the delay of the K signal. The value can be positive or negative.
The average delay difference between the address and command and K is expressed by the
following equation:
where n is the number of K clocks.
Average delay
difference between
write data signals and
K
A value equal to the average of the longest and smallest write data signal delay values, minus
the delay of the K signal. Write data signals include the D and BWS signals. The value can be
positive or negative.
The average delay difference between D and K is expressed by the following equation:
where n is the number of DQS groups.
Average delay
difference between
read data signals and
CQ
A value equal to the average of the longest and smallest read data signal delay values, minus
the delay of the CQ signal. The value can be positive or negative.
The average delay difference between Q and CQ is expressed by the following equation:
where n is the number of CQ groups.
7.3.4.3.3 Board Skew parameters for RLDRAM II and RLDRAM 3
The following table lists board skew parameters for RLDRAM II and RLDRAM 3
interfaces.
Table 81.
Parameter Descriptions
Parameter
Maximum CK delay to
device
Description
The delay of the longest CK trace from the FPGA to any device/DIMM is expressed by the
following equation:
where n is the number of memory clocks. For example, the maximum CK delay for two pairs
of memory clocks is expressed by the following equation:
Maximum DK delay to
device
The delay of the longest DK trace from the FPGA to any device/DIMM is expressed by the
following equation:
where n is the number of DK. For example, the maximum DK delay for two DK is expressed by
the following equation:
Minimum delay
difference between CK
and DK
The minimum delay difference between the CK signal and any DK signal when arriving at the
memory device(s). The value is equal to the minimum delay of the CK signal minus the
maximum delay of the DK signal. The value can be positive or negative.
The minimum delay difference between CK and DK is expressed by the following equations:
where n is the number of memory clocks and m is the number of DK. For example, the
minimum delay difference between CK and DK for two pairs of memory clocks and four DK
signals (two DK signals for each clock) is expressed by the following equation:
Maximum delay
difference between CK
and DK
The maximum delay difference between the CK signal and any DK signal when arriving at the
memory device(s). The value is equal to the maximum delay of the CK signal minus the
minimum delay of the DK signal. The value can be positive or negative.
The maximum delay difference between CK and DK is expressed by the following equations:
where n is the number of memory clocks and m is the number of DK. For example, the
maximum delay difference between CK and DK for two pairs of memory clocks and four DK
signals (two DK signals for each clock) is expressed by the following equation:
Maximum delay
difference between
devices
The maximum delay difference of data signals between devices is expressed by the following
equation:
For example, in a two-device configuration there is greater propagation delay for data signals
going to and returning from the furthest device relative to the nearest device. This parameter
is applicable for depth expansion. Set the value to 0 for non-depth expansion design.
Maximum skew within
QK group
The maximum skew between the DQ signals referenced by a common QK signal.
Maximum skew
between QK groups
The maximum skew between QK signals of different data groups.
Maximum skew within
address/command bus
The maximum skew between the address/command signals.
Average delay
difference between
address/command and
CK
A value equal to the average of the longest and smallest address/command signal delay
values, minus the delay of the CK signal. The value can be positive or negative.
The average delay difference between the address and command and CK is expressed by the
following equation:
where n is the number of memory clocks.
Average delay
difference between
write data signals and
DK
A value equal to the average of the longest and smallest write data signal delay values, minus
the delay of the DK signal. Write data signals include the DQ and DM signals. The value can be
positive or negative.
The average delay difference between DQ and DK is expressed by the following equation:
where n is the number of DK groups.
Average delay
difference between
read data signals and
QK
A value equal to the average of the longest and smallest read data signal delay values, minus
the delay of the QK signal. The value can be positive or negative.
The average delay difference between DQ and QK is expressed by the following equation:
where n is the number of QK groups.
7.3.5 Controller Settings for UniPHY IP
Use the Controller Settings tab to apply the controller settings suitable for your
design.
Note:
This section describes parameters for the High Performance Controller II (HPC II) with
advanced features first introduced in version 11.0 for designs generated in version
11.0 or later. Designs created in earlier versions and regenerated in version 11.0 or
later do not inherit the new advanced features; for information on parameters for HPC
II without the advanced features, refer to the External Memory Interface Handbook for
Quartus II version 10.1, available on the Literature: External Memory Interfaces page
of www.altera.com.
Table 82.
Controller Settings for LPDDR2/DDR2/DDR3 SDRAM
Parameter
Avalon Interface
Low Power Mode
Description
Generate power-of-2 bus widths
for SOPC Builder
Rounds down the Avalon-MM side data
bus to the nearest power of 2. You
must enable this option for Qsys
systems.
If this option is enabled, the Avalon data
buses are truncated to 256 bits wide.
One Avalon read-write transaction of
256 bit width maps to four memory
beat transactions, each of 72 bits (8
MSB bits are zero, while 64 LSB bits
carry useful content). The four memory
beats may comprise an entire burst
length-of-4 transaction, or part of a
burst-length-of-8 transaction.
Generate SOPC Builder compatible
resets
This option is not required when using
the IP Catalog or Qsys.
Maximum Avalon-MM burst length
Specifies the maximum burst length on
the Avalon-MM bus. Affects the
AVL_SIZE_WIDTH parameter.
Enable Avalon-MM byte-enable
signal
When you turn on this option, the
controller adds the byte enable signal
(avl_be) for the Avalon-MM bus to
control the data mask (mem_dm) pins
going to the memory interface. You
must also turn on Enable DM pins if
you are turning on this option.
When you turn off this option, the byte
enable signal (avl_be) is not enabled
for the Avalon-MM bus, and by default
all bytes are enabled. However, if you
turn on Enable DM pins with this
option turned off, all write words are
written.
Avalon interface address width
The address width on the Avalon-MM
interface.
Avalon interface data width
The data width on the Avalon-MM
interface.
Enable self-refresh controls
Enables the self-refresh signals on the
controller top-level design. These
controls allow you to control when the
memory is placed into self-refresh
mode.
Efficiency
Enable Deep Power-Down Controls
Enables the Deep Power-Down signals
on the controller top level. These
signals control when the memory is
placed in Deep Power-Down mode.
This parameter is available only for
LPDDR2 SDRAM.
Enable auto-power down
Allows the controller to automatically
place the memory into (Precharge)
power-down mode after a specified
number of idle cycles. Specify the
number of idle cycles after which the
controller powers down the memory
using the Auto power-down cycles
parameter.
Auto power-down cycles
The number of idle controller clock
cycles after which the controller
automatically powers down the
memory. The legal range is from 1 to
65,535 controller clock cycles.
Enable user auto refresh controls
Enables the user auto-refresh control
signals on the controller top level.
These controller signals allow you to
control when the controller issues
memory autorefresh commands.
Enable auto precharge control
Enables the autoprecharge control on
the controller top level. Asserting the
autoprecharge control signal while
requesting a read or write burst allows
you to specify whether the controller
should close (autoprecharge) the
currently open page at the end of the
read or write burst.
Local-to-memory address mapping
Allows you to control the mapping
between the address bits on the
Avalon-MM interface and the chip, row,
bank, and column bits on the memory.
Select Chip-Row-Bank-Col to
improve efficiency with sequential
traffic.
Select Chip-Bank-Row-Col to
improve efficiency with random traffic.
Select Row-Chip-Bank-Col to
improve efficiency with multiple chip
select and sequential traffic.
Command queue look ahead depth
Selects a look-ahead depth value to
control how many read or writes
requests the look-ahead bank
management logic examines. Larger
numbers are likely to increase the
efficiency of the bank management,
but at the cost of higher resource
usage. Smaller values may be less
efficient, but also use fewer resources.
The valid range is from 1 to 16.
Enable reordering
Allows the controller to perform
command and data reordering that
reduces bus turnaround time and row/
bank switching time to improve
controller efficiency.
Configuration, Status, and Error
Handling
Multiple Port Front End
Starvation limit for each command
Specifies the number of commands
that can be served before a waiting
command is served. The valid range is
from 1 to 63.
Enable Configuration and Status
Register Interface
Enables run-time configuration and
status interface for the memory
controller. This option adds an
additional Avalon-MM slave port to the
memory controller top level, which you
can use to change or read out the
memory timing parameters, memory
address sizes, mode register settings
and controller status. If Error Detection
and Correction Logic is enabled, the
same slave port also allows you to
control and retrieve the status of this
logic.
CSR port host interface
Specifies the type of connection to the
CSR port. The port can be exported,
internally connected to a JTAG Avalon
Master, or both.
Select Internal (JTAG) to connect the
CSR port to a JTAG Avalon Master.
Select Avalon-MM Slave to export the
CSR port.
Select Shared to export and connect
the CSR port to a JTAG Avalon Master.
Enable error detection and
correction logic
Enables ECC for single-bit error
correction and double-bit error
detection. Your memory interface must
be a multiple of 16, 24, 40, or 72 bits
wide to use ECC.
Enable auto error correction
Allows the controller to perform auto
correction when a single-bit error is
detected by the ECC logic.
Export bonding port
Turn on this option to export the bonding
interface for a wider Avalon data width
with two controllers. Bonding ports are
exported to the top level.
Number of ports
Specifies the number of Avalon-MM
Slave ports to be exported. The
number of ports depends on the width
and the type of port you selected.
There are four 64-bit read FIFOs and
four 64-bit write FIFOs in the multi-port
front end (MPFE) component. For
example, if you select a 256-bit width
and a bidirectional slave port, all the
FIFOs are fully utilized; therefore, you
can select only one port.
Note: This parameter is not available
for MAX 10 devices.
Width
Specifies the local data width for each
Avalon-MM Slave port. The width
depends on the type of slave port and
also the number of ports selected. This
is due to the limitation of the FIFO
counts in the MPFE. There are four 64-bit read FIFOs and four 64-bit write
FIFOs in the MPFE. For example, if you
select one bidirectional slave port, you
can select up to 256 bits to utilize all
the read and write FIFOs.
As a general guideline for choosing an
optimum port width for your half-rate
or quarter-rate design, apply the
following equation (a worked example
follows this table):
port width = 2 x DQ width x interface width multiplier
where the interface width multiplier is
2 for half-rate interfaces and 4 for
quarter-rate interfaces.
Priority
Specifies the absolute priority for each
Avalon-MM Slave port. Any transaction
from the port with higher priority
number will be served before
transactions from the port with lower
priority number.
Weight
Specifies the relative priority for each
Avalon-MM Slave port. When there are
two or more ports having the same
absolute priority, the transaction from
the port with higher (bigger number)
relative weight will be served first. You
can set the weight from a range of 0 to
32.
Type
Specifies whether the Avalon-MM slave
port is a bidirectional port, a read-only
port, or a write-only port.
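As a worked example of the port-width guideline quoted in the Width row above, the sketch below evaluates port width = 2 x DQ width x interface-width multiplier. The function name is illustrative only, not an Intel IP API.

# Worked example of the port-width guideline from the Width row above.
def recommended_port_width(dq_width, rate):
    """Suggested Avalon-MM port width for a half-rate or quarter-rate design."""
    multiplier = {"half": 2, "quarter": 4}[rate]
    return 2 * dq_width * multiplier

print(recommended_port_width(32, "half"))     # 128-bit port for a x32 half-rate interface
print(recommended_port_width(32, "quarter"))  # 256-bit port for a x32 quarter-rate interface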
Table 83.
Controller Settings for QDR II/QDR II+ SRAM and RLDRAM II
Parameter
Description
Generate power-of-2 data bus widths for SOPC Builder
Rounds down the Avalon-MM side data
bus to the nearest power of 2. You
must enable this option for Qsys
systems.
Generate SOPC Builder compatible resets
This option is not required when using
the IP Catalog or Qsys.
Maximum Avalon-MM burst length
Specifies the maximum burst length on
the Avalon-MM bus.
Enable Avalon-MM byte-enable signal
When you turn on this option, the
controller adds a byte-enable signal
(avl_be_w) for the Avalon-MM bus,
which controls the bws_n signal on the
memory side to mask bytes during
write operations.
When you turn off this option, the
avl_be_w signal is not available and
the controller will always drive the
memory bws_n signal so as to not
mask any bytes during write
operations.
Avalon interface address width
Specifies the address width on the
Avalon-MM interface.
Avalon interface data width
Specifies the data width on the Avalon-MM interface.
Reduce controller latency by
Specifies the number of clock cycles by
which to reduce controller latency.
Lower controller latency results in
lower resource usage and fMAX, while
higher latency results in higher
resource usage and fMAX.
Enable user refresh
Enables user-controlled refresh.
Refresh signals will have priority over
read/write requests.
This option is available for RLDRAM II
only.
Enable error detection parity
Enables per-byte parity protection.
This option is available for RLDRAM II
only.
Related Links
Literature: External Memory Interfaces
7.3.6 Diagnostics for UniPHY IP
The Diagnostics tab allows you to set parameters for certain diagnostic functions.
The following table describes parameters for simulation.
Table 84.
Simulation Options
Parameter
Description
Simulation Options
Auto-calibration mode
Specifies whether you want to improve simulation performance by reducing
calibration. There is no change to the generated RTL. The following auto-calibration
modes are available:
• Skip calibration—provides the fastest simulation. It loads the settings
calculated from the memory configuration and enters user mode.
• Quick calibration—calibrates (without centering) one bit per group before
entering user mode.
• Full calibration—calibrates the same as in hardware, and includes all phases,
delay sweeps, and centering on every data bit. You can use timing annotated
memory models. Be aware that full calibration can take hours or days to
complete.
To perform proper PHY simulation, select Quick calibration or Full calibration.
For more information, refer to the “Simulation Options” section in the Simulating
Memory IP chapter.
For QDR II, QDR II+ SRAM, and RLDRAM II, the Nios II-based sequencer must be
selected to enable auto-calibration mode selection.
Note: This parameter is not available for MAX 10 devices.
Skip memory initialization delays
When you turn on this option, required delays between specific memory
initialization commands are skipped to speed up simulation.
Note: This parameter is not available for MAX 10 devices.
Enable verbose memory model
output
Turn on this option to display more detailed information about each memory
access during simulation.
Enable support for Nios II
ModelSim flow in Eclipse
Initializes the memory interface for use with the Run as Nios II ModelSim flow
with Eclipse.
This parameter is not available for QDR II and QDR II+ SRAM.
Note: This parameter is not available for MAX 10 devices.
Debug Options
Debug level
Specifies the debug level of the memory interface.
Efficiency Monitor and Protocol Checker Settings
Enable the Efficiency Monitor and
Protocol Checker on the
Controller Avalon Interface
Enables the efficiency monitor and protocol checker block on the controller Avalon
interface.
This option is not available for QDR II and QDR II+ SRAM, or for the MAX 10
device family, or for Arria V or Cyclone V designs using the Hard Memory
Controller.
7.4 Intel Arria 10 External Memory Interface IP
This section contains information about parameterizing Intel Arria 10 External Memory
Interface IP.
7.4.1 Qsys Interfaces
The interfaces in the Arria 10 External Memory Interface IP each have signals that can
be connected in Qsys. The following tables list the signals available for each interface
and provide a description and guidance on how to connect those interfaces.
Listed interfaces and signals are available in all configurations unless stated otherwise
in the description column. For Arria 10 External Memory Interface for HPS, the
global_reset_reset_sink, pll_ref_clk_clock_sink,
hps_emif_conduit_end, oct_conduit_end and mem_conduit_end interfaces are
the only available interfaces regardless of your configuration.
Arria 10 External Memory Interface IP Interfaces
Table 85.
Interface: afi_clk_conduit_end
Interface type: Conduit
Signals in Interface
afi_clk
Direction
Output
Availability
•
•
DDR3, DDR4, LPDDR3,
RLDRAM 3, QDR IV
Hard PHY only
Description
The Altera PHY Interface (AFI)
clock output signal. The clock
frequency in relation to the
memory clock frequency depends
on the Clock rate of user logic
value set in the parameter editor.
Connect this interface to the (clock
input) conduit of the custom AFI-based memory controller
connected to the
afi_conduit_end or any user
logic block that requires the
generated clock frequency.
Table 86.
Interface: afi_conduit_end
Interface type: Conduit
Signals in Interface
Direction
Availability
afi_cal_success
Output
•
afi_cal_fail
Output
•
DDR3, DDR4, LPDDR3,
RLDRAM 3, QDR IV
Hard PHY only
afi_cal_req
Input
afi_rlat
Output
afi_wlat
Output
afi_addr
Input
afi_rst_n
Input
afi_wdata_valid
Input
afi_wdata
Input
afi_rdata_en_full
Input
afi_rdata
Output
afi_rdata_valid
Output
afi_rrank
Input
afi_wrank
Input
afi_ba
Input
•
•
DDR3, DDR4, RLDRAM 3
Hard PHY only
afi_cs_n
Input
•
•
DDR3, DDR4, LPDDR3,
RLDRAM 3
Hard PHY only
•
•
DDR3, DDR4, LPDDR3
Hard PHY only
•
•
QDR IV
Hard PHY only
•
•
QDR IV
Hard PHY only
afi_cke
Input
afi_odt
Input
afi_dqs_burst
Input
afi_ap
Input
afi_pe_n
Output
afi_ainv
Input
afi_ld_n
Input
afi_rw_n
Input
afi_cfg_n
Input
afi_lbk0_n
Input
afi_lbk1_n
Input
afi_rdata_dinv
Output
afi_wdata_dinv
Input
Description
The Altera PHY Interface (AFI)
signals between the external
memory interface IP and the
custom AFI-based memory
controller.
Connect this interface to the AFI
conduit of the custom AFI-based
memory controller.
The Altera PHY Interface (AFI)
signals between the external
memory interface IP and the
custom AFI-based memory
controller.
Connect this interface to the AFI
conduit of the custom AFI-based
memory controller.
afi_we_n
Input
•
•
DDR3, RLDRAM 3
Hard PHY only
afi_dm
Input
•
•
•
DDR3, LPDDR3, RLDRAM 3
Hard PHY only
Enable DM pins=True
afi_ras_n
Input
afi_cas_n
Input
•
•
DDR3
Hard PHY only
afi_rm
Input
•
•
•
DDR3
Hard PHY only
LRDIMM with Number of
rank multiplication pins >
0
afi_par
Input
•
•
•
•
•
•
DDR3
Hard PHY only
RDIMM/LRDIMM
DDR4
Hard PHY only
Enable alert_n/par pins =
True
afi_bg
Input
afi_act_n
Input
•
•
DDR4
Hard PHY only
afi_dm_n
Input
•
•
•
DDR4
Hard PHY only
Enable DM pins=True
afi_ref_n
Input
•
•
RLDRAM 3
Hard PHY only
Description: The Altera PHY Interface (AFI) signals between the external memory interface IP and the custom AFI-based memory controller. Connect this interface to the AFI conduit of the custom AFI-based memory controller. For more information, refer to the AFI 4.0 Specification.
Table 87.
Interface: afi_half_clk_conduit_end
Interface type: Conduit
Signals in Interface (Direction): afi_half_clk (Output)
Availability: DDR3, DDR4, LPDDR3, RLDRAM 3, QDR IV; Hard PHY only
Description: The Altera PHY Interface (AFI) half clock output signal. The clock runs at half the frequency of the AFI clock (afi_clk). Connect this interface to the clock input conduit of the user logic block that needs to be clocked at the generated clock frequency.
Table 88.
Interface: afi_reset_conduit_end
Interface type: Conduit
Signals in Interface (Direction): afi_reset_n (Output)
Availability: DDR3, DDR4, LPDDR3, RLDRAM 3, QDR IV; Hard PHY only
Description: The Altera PHY Interface (AFI) reset output signal. Asserted when the PLL becomes unlocked or when the PHY is reset. Asynchronous assertion and synchronous deassertion. Connect this interface to the reset input conduit of the custom AFI-based memory controller connected to the afi_conduit_end, and to all user logic blocks that are in the AFI clock domain (afi_clk or afi_half_clk).
Table 89.
Interface: cal_debug_avalon_slave
Interface type: Avalon Memory-Mapped Slave
Signals in Interface (Direction): cal_debug_waitrequest (Output), cal_debug_read (Input), cal_debug_write (Input), cal_debug_addr (Input), cal_debug_read_data (Output), cal_debug_write_data (Input), cal_debug_byteenable (Input), cal_debug_read_data_valid (Output)
Availability: EMIF Debug Toolkit / On-Chip Debug Port=Export
Description: The Avalon-MM signals between the external memory interface IP and the external memory interface Debug Component. Connect this interface to the (to_ioaux) Avalon-MM master of the Arria 10 EMIF Debug Component IP, or to the (cal_debug_out_avalon_master) Avalon-MM master of the other external memory interface IP that has exported the interface. If you are not using the Altera EMIF Debug Toolkit, connect this interface to the Avalon-MM master of the custom debug logic. When in daisy-chaining mode, ensure that one of the connected Avalon masters is either the Arria 10 EMIF Debug Component IP or an external memory interface IP with EMIF Debug Toolkit/On-Chip Debug Port set to Add EMIF Debug Interface.
Table 90.
Interface: cal_debug_clk_clock_sink
Interface type: Clock Input
Signals in Interface (Direction): cal_debug_clk (Input)
Availability: EMIF Debug Toolkit / On-Chip Debug Port=Export
Description: The calibration debug clock input signal. Connect this interface to the (avl_clk_out) clock output of the Arria 10 EMIF Debug Component IP, or to the (cal_debug_out_clk_clock_source) clock output of the other external memory interface IP, depending on which IP the cal_debug_avalon_slave interface is connecting to. If you are not using the Altera EMIF Debug Toolkit, connect this interface to the clock output of the custom debug logic.
Table 91.
Interface: cal_debug_out_avalon_master
Interface type: Avalon Memory-Mapped Master
Signals in Interface (Direction): cal_debug_out_waitrequest (Input), cal_debug_out_read (Output), cal_debug_out_write (Output), cal_debug_out_addr (Output), cal_debug_out_read_data (Input), cal_debug_out_write_data (Output), cal_debug_out_byteenable (Output), cal_debug_out_read_data_valid (Input)
Availability: EMIF Debug Toolkit / On-Chip Debug Port=Export; Add EMIF Debug Interface with Enable Daisy-Chaining for EMIF Debug Toolkit / On-Chip Debug Port=True
Description: The Avalon-MM signals between the external memory interface IP and the other external memory interface IP. Connect this interface to the (cal_debug_avalon_slave) Avalon-MM slave of the external memory interface IP that has exported the interface.
Table 92.
Interface: cal_debug_out_clk_clock_source
Interface type: Clock Output
Signals in Interface (Direction): cal_debug_out_clk (Output)
Availability: EMIF Debug Toolkit / On-Chip Debug Port=Export; Add EMIF Debug Interface with Enable Daisy-Chaining for EMIF Debug Toolkit / On-Chip Debug Port=True
Description: The calibration debug clock output signal. For EMIF Debug Toolkit/On-Chip Debug Port=Export with Enable Daisy-Chaining for EMIF Debug Toolkit/On-Chip Debug Port=True, the clock frequency follows the cal_debug_clk frequency. Otherwise, the clock frequency in relation to the memory clock frequency depends on the Clock rate of user logic value set in the parameter editor. Connect this interface to the (cal_debug_clk_clock_sink) clock input of the other external memory interface IP to which the cal_debug_out_avalon_master interface connects, or to any user logic block that needs to be clocked at the generated clock frequency.
Table 93.
Interface: cal_debug_out_reset_reset_source
Interface type: Reset Output
Signals in Interface (Direction): cal_debug_out_reset_n (Output)
Availability: EMIF Debug Toolkit / On-Chip Debug Port=Export; Add EMIF Debug Interface with Enable Daisy-Chaining for EMIF Debug Toolkit / On-Chip Debug Port=True
Description: The calibration debug reset output signal. Asynchronous assertion and synchronous deassertion. Connect this interface to the (cal_debug_reset_reset_sink) reset input of the other external memory interface IP to which the cal_debug_out_avalon_master interface connects, and to all user logic blocks that are in the calibration debug clock domain (cal_debug_out_clk). If you are not using the Altera EMIF Debug Toolkit, connect this interface to the reset input of the custom debug logic.
Table 94.
Interface: cal_debug_reset_reset_sink
Interface type: Reset Input
Signals in Interface (Direction): cal_debug_reset_n (Input)
Availability: EMIF Debug Toolkit / On-Chip Debug Port=Export
Description: The calibration debug reset input signal. Requires asynchronous assertion and synchronous deassertion. Connect this interface to the (avl_rst_out) reset output of the Arria 10 EMIF Debug Component IP, or to the (cal_debug_out_reset_reset_source) reset output of the other external memory interface IP, depending on which IP the cal_debug_avalon_slave interface is being connected to.
Table 95.
Interface: clks_sharing_master_out_conduit_end
Interface type: Conduit
Signals in Interface (Direction): clks_sharing_master_out (Output)
Availability: Core clocks sharing=Master
Description: The core clock output signals. Connect this interface to the (clks_sharing_slave_in_conduit_end) conduit of the other external memory interface IP with Core clocks sharing set to Slave, or to any other PLL slave.
Table 96.
Interface: clks_sharing_slave_in_conduit_end
Interface type: Conduit
Signals in Interface (Direction): clks_sharing_slave_in (Input)
Availability: Core clocks sharing=Slave
Description: The core clock input signals. Connect this interface to the (clks_sharing_master_out_conduit_end) conduit of the other external memory interface IP with Core clocks sharing set to Master, or to any other PLL master.
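The following RTL fragment is a minimal sketch, not generated output, of how a master interface can feed its shared core clocks to a slave interface. The module and instance names and the conduit width are illustrative assumptions; take the actual port names and widths from the wrappers that Qsys generates for your system.

// One interface generated with Core clocks sharing=Master drives the shared
// core clocks; each interface generated with Core clocks sharing=Slave
// receives them on clks_sharing_slave_in.
wire [1:0] core_clks_shared;                    // width is illustrative

emif_ddr4_master u_emif_master (                // hypothetical module/instance names
    // ... memory, Avalon-MM, clock, and reset connections ...
    .clks_sharing_master_out (core_clks_shared)
);

emif_ddr4_slave u_emif_slave (
    // ... memory, Avalon-MM, clock, and reset connections ...
    .clks_sharing_slave_in   (core_clks_shared)
);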
Table 97.
Interface: ctrl_amm_avalon_slave
Interface type: Avalon Memory-Mapped Slave
Signals in Interface (Direction): amm_ready (Output), amm_read (Input), amm_write (Input), amm_address (Input), amm_readdata (Output), amm_writedata (Input), amm_burstcount (Input), amm_readdatavalid (Output)
Availability: DDR3, DDR4 with Hard PHY & Hard Controller; QDR II/II+/II+ Xtreme, QDR IV
Signals in Interface (Direction): amm_byteenable (Input)
Availability: DDR3, DDR4 with Hard PHY & Hard Controller and Enable DM pins=True; QDR II/II+/II+ Xtreme with Enable BWS# pins=True
Description: The Avalon-MM signals between the external memory interface IP and the user logic. Connect this interface to the Avalon-MM master of the user logic that needs to access the external memory device. For QDR II/II+/II+ Xtreme, connect ctrl_amm_avalon_slave_0 to the user logic for read requests and ctrl_amm_avalon_slave_1 to the user logic for write requests. In Ping Pong PHY mode, each interface controls only one memory device: connect ctrl_amm_avalon_slave_0 to the user logic that accesses the first memory device, and ctrl_amm_avalon_slave_1 to the user logic that accesses the second memory device.
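As a usage illustration only, and not part of the generated IP, the following user-logic sketch issues single-beat reads on this interface. The module name, port widths, and the start_read trigger are assumptions that depend on your configuration, and the exact command-acceptance timing follows the Avalon-MM properties of the generated interface.

// Minimal Avalon-MM read master for ctrl_amm_avalon_slave (illustrative).
module user_read_master #(
    parameter ADDR_WIDTH = 27,                  // match the generated interface
    parameter DATA_WIDTH = 512
) (
    input                       emif_usr_clk,
    input                       emif_usr_reset_n,
    input                       start_read,       // user-defined trigger
    input      [ADDR_WIDTH-1:0] read_address,     // user-defined address
    input                       amm_ready,
    input                       amm_readdatavalid,
    input      [DATA_WIDTH-1:0] amm_readdata,
    output reg                  amm_read,
    output reg [ADDR_WIDTH-1:0] amm_address,
    output reg [6:0]            amm_burstcount,
    output reg [DATA_WIDTH-1:0] captured_data
);
    // Hold the read command until the controller accepts it (amm_ready high).
    always @(posedge emif_usr_clk) begin
        if (!emif_usr_reset_n) begin
            amm_read <= 1'b0;
        end else if (start_read && !amm_read) begin
            amm_read       <= 1'b1;
            amm_address    <= read_address;
            amm_burstcount <= 7'd1;
        end else if (amm_read && amm_ready) begin
            amm_read <= 1'b0;
        end
    end

    // Read data returns some cycles later, qualified by amm_readdatavalid.
    always @(posedge emif_usr_clk)
        if (amm_readdatavalid)
            captured_data <= amm_readdata;
endmodule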
Table 98.
Interface: ctrl_auto_precharge_conduit_end
Interface type: Conduit
Signals in Interface (Direction): ctrl_auto_precharge_req (Input)
Availability: DDR3, DDR4 with Hard PHY & Hard Controller and Enable Auto-Precharge Control=True
Description: The auto-precharge control input signal. Asserting the ctrl_auto_precharge_req signal while issuing a read or write burst instructs the external memory interface IP to issue a read or write with auto-precharge to the external memory device. This precharges the row immediately after the command currently accessing it finishes, potentially speeding up a future access to a different row of the same bank. Connect this interface to the conduit of the user logic block that controls when the external memory interface IP needs to issue a read or write with auto-precharge to the external memory device.
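As an illustration only, user logic might drive the request combinationally from its own page-tracking logic. Here amm_read and amm_write are the commands on ctrl_amm_avalon_slave, and last_access_to_row is a hypothetical user-defined signal.

// Request read/write with auto-precharge for the command issued this cycle.
assign ctrl_auto_precharge_req = (amm_read || amm_write) && last_access_to_row;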
Table 99.
Interface: ctrl_ecc_user_interrupt_conduit_end
Interface type: Conduit
Signals in Interface (Direction): ctrl_ecc_user_interrupt (Output)
Availability: DDR3, DDR4 with Hard PHY & Hard Controller and Enable Error Detection and Correction Logic=True
Description: Controller ECC user interrupt interface for connection to a custom control block that must be notified when ECC errors occur.
Table 100.
Interface: ctrl_mmr_avalon_slave
Interface type: Avalon Memory-Mapped Slave
Signals in Interface (Direction): mmr_waitrequest (Output), mmr_read (Input), mmr_write (Input), mmr_address (Input), mmr_readdata (Output), mmr_writedata (Input), mmr_burstcount (Input), mmr_byteenable (Input), mmr_beginbursttransfer (Input), mmr_readdatavalid (Output)
Availability: DDR3, DDR4, LPDDR3 with Hard PHY & Hard Controller and Enable Memory-Mapped Configuration and Status Register (MMR)=True
Description: The Avalon-MM signals between the external memory interface IP and the user logic. Connect this interface to the Avalon-MM master of the user logic that needs to access the Memory-Mapped Configuration and Status Register (MMR) in the external memory interface IP.
Table 101.
Interface: ctrl_power_down_conduit_end
Interface type: Conduit
Signals in Interface (Direction): ctrl_power_down_ack (Output)
Availability: DDR3, DDR4, LPDDR3 with Hard PHY & Hard Controller and Enable Auto Power Down=True
Description: The auto power-down acknowledgment signal. When the ctrl_power_down_ack signal is asserted, it indicates that the external memory interface IP is placing the external memory device into power-down mode. Connect this interface to the conduit of the user logic block that requires the auto power-down status, or leave it unconnected.
Table 102.
Interface: ctrl_user_priority_conduit_end
Interface type: Conduit
Signals in Interface (Direction): ctrl_user_priority_hi (Input)
Availability: DDR3, DDR4, LPDDR3 with Hard PHY & Hard Controller; Avalon Memory-Mapped and Enable Command Priority Control=True
Description: The command priority control input signal. Asserting the ctrl_user_priority_hi signal while issuing a read or write request instructs the external memory interface to treat it as a high-priority command. The external memory interface attempts to execute high-priority commands sooner, to reduce latency. Connect this interface to the conduit of the user logic block that determines when the external memory interface IP treats the read or write request as a high-priority command.
Table 103.
Interface: emif_usr_clk_clock_source
Interface type: Clock Output
Signals in Interface (Direction): emif_usr_clk (Output)
Availability: DDR3, DDR4, LPDDR3 with Hard PHY & Hard Controller; QDR II/II+/II+ Xtreme; QDR IV
Description: The user clock output signal. The clock frequency in relation to the memory clock frequency depends on the Clock rate of user logic value set in the parameter editor. Connect this interface to the clock input of the respective user logic connected to the ctrl_amm_avalon_slave_0 interface, or to any user logic block that must be clocked at the generated clock frequency.
Table 104.
Interface: emif_usr_reset_reset_source
Interface type: Reset Output
Signals in Interface (Direction): emif_usr_reset_n (Output)
Availability: DDR3, DDR4, LPDDR3 with Hard PHY & Hard Controller; QDR II/II+/II+ Xtreme; QDR IV
Description: The user reset output signal. Asserted when the PLL becomes unlocked or the PHY is reset. Asynchronous assertion and synchronous deassertion. Connect this interface to the reset input of the respective user logic connected to the ctrl_amm_avalon_slave_0 interface, and of any user logic block that is clocked at the generated clock frequency.
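For illustration, user logic attached to ctrl_amm_avalon_slave_0 runs in the emif_usr_clk domain and observes emif_usr_reset_n; the counter below is only an example of such a block.

// Runs in the emif_usr_clk domain. emif_usr_reset_n asserts asynchronously
// and deasserts synchronously, so sampling it synchronously here is safe.
reg [31:0] read_beat_count;
always @(posedge emif_usr_clk) begin
    if (!emif_usr_reset_n)
        read_beat_count <= 32'd0;
    else if (amm_readdatavalid)
        read_beat_count <= read_beat_count + 32'd1;
end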
Table 105.
Interface: emif_usr_clk_sec_clock_source
Interface type: Clock Output
Signals in Interface (Direction): emif_usr_clk_sec (Output)
Availability: DDR3, DDR4 with Ping Pong PHY
Description: The user clock output signal. The clock frequency in relation to the memory clock frequency depends on the Clock rate of user logic value set in the parameter editor. Connect this interface to the clock input of the respective user logic connected to the ctrl_amm_avalon_slave_1 interface, or to any user logic block that must be clocked at the generated clock frequency.
Table 106.
Interface: emif_usr_reset_sec_reset_source
Interface type: Reset Output
Signals in Interface (Direction): emif_usr_reset_n_sec (Output)
Availability: DDR3, DDR4 with Ping Pong PHY
Description: The user reset output signal. Asserted when the PLL becomes unlocked or the PHY is reset. Asynchronous assertion and synchronous deassertion. Connect this interface to the reset input of the respective user logic connected to the ctrl_amm_avalon_slave_1 interface, and of any user logic block that is clocked at the generated clock frequency.
Table 107.
Interface: global_reset_reset_sink
Interface type: Reset Input
Signals in Interface (Direction): global_reset_n (Input)
Availability: Core clocks sharing=No Sharing / Master
Description: The global reset input signal. Asserting the global_reset_n signal causes the external memory interface IP to be reset and recalibrated. Connect this interface to the reset output of the asynchronous or synchronous reset source that controls when the external memory interface IP needs to be reset and recalibrated.
Table 108.
Interface: hps_emif_conduit_end
Interface type: Conduit
Signals in Interface (Direction): hps_to_emif (Input), emif_to_hps (Output)
Availability: Arria 10 EMIF for HPS IP
Description: The user interface signals between the external memory interface IP and the Hard Processor System (HPS). Connect this interface to the EMIF conduit of the Arria 10 Hard Processor System.
Table 109.
Interface: mem_conduit_end
Interface type: Conduit
The memory interface signals between the external memory interface IP and the external memory device. Export this interface to the top level for I/O assignments. Typically, mem_rm[0] and mem_rm[1] connect to CS2# and CS3# of the memory buffer of all LRDIMM slots.
Signals in Interface
Direction
Availability
mem_ck
Output
Always available
mem_ck_n
Output
mem_reset_n
Output
mem_a
Output
mem_k_n
Output
•
QDR II
mem_ras_n
Output
•
DDR3
mem_cas_n
Output
mem_odt
Output
mem_dqs
Bidirectional
mem_dqs_n
Bidirectional
•
DDR3, DDR4, LPDDR3
mem_ba
Output
•
DDR3, DDR4, RLDRAM 3
mem_cs_n
Output
•
DDR3, DDR4, LPDDR3, RLDRAM 3
mem_dq
Bidirectional
mem_we_n
Output
•
DDR3, RLDRAM 3
mem_dm
Output
•
DDR3, LPDDR3, RLDRAM 3 with Enable DM pins=True
mem_rm
Output
•
DDR3, RLDRAM 3 with Memory format=LRDIMM and Number of
rank multiplication pins > 0
mem_par
Output
•
•
DDR3 with Memory format=RDIMM / LRDIMM
DDR4 with Enable alert_n/par pins=True
mem_alert_n
Input
mem_cke
Output
•
DDR3, DDR4, LPDDR3
mem_bg
Output
•
DDR4
mem_act_n
Output
mem_dbi_n
Bidirectional
•
DDR4 with Enable DM pins=True or Write DBI=True or Read
DBI=True
mem_k
Output
mem_wps_n
Output
mem_rps_n
Output
mem_doff_n
Output
mem_d
Output
mem_q
Input
mem_cq
Input
mem_cq_n
Input
•
QDR II/II+/II+ Xtreme
mem_bws_n
Output
•
QDR II/II+/II+ Xtreme with Enable BWS# pins=True
mem_dk
Output
mem_dk_n
Output
mem_ref_n
Output
mem_qk
Input
mem_qk_n
Input
•
RLDRAM 3
mem_ap
Output
•
QDR IV with Use Address Parity Bit=True
mem_pe_n
Input
•
QDR IV with Use Address Parity Bit=True
mem_ainv
Output
•
QDR IV with Address Bus Inversion=True
mem_lda_n
Output
•
QDR IV
mem_lda_b
Output
•
QDR IV
mem_rwa_n
Output
•
QDR IV
mem_rwb_n
Output
•
QDR IV
mem_cfg_n
Output
•
QDR IV
mem_lbk0_n
Output
•
QDR IV
mem_lbk1_n
Output
•
QDR IV
mem_dka
Output
•
QDR IV
mem_dka_n
Output
•
QDR IV
mem_dkb
Output
•
QDR IV
mem_dkb_n
Output
•
QDR IV
mem_qka
Input
•
QDR IV
mem_qka_n
Input
•
QDR IV
mem_qkb
Input
•
QDR IV
mem_qkb_n
Input
•
QDR IV
mem_dqa
Bidirectional
•
QDR IV
mem_dqb
Bidirectional
•
QDR IV
mem_dinva
Bidirectional
•
QDR IV with Data Bus Inversion=True
mem_dinvb
Bidirectional
•
QDR IV with Data Bus Inversion=True
Table 110.
Interface: oct_conduit_end
Interface type: Conduit
Signals in Interface (Direction): oct_rzqin (Input)
Availability: Always available
Description: The On-Chip Termination (OCT) RZQ reference resistor input signal. Export this interface to the top level for I/O assignments.
Table 111.
Interface: pll_ref_clk_clock_sink
Interface type: Clock Input
Signals in Interface (Direction): pll_ref_clk (Input)
Availability: Core clocks sharing=No Sharing / Master
Description: The PLL reference clock input signal. Connect this interface to the clock output of the clock source that matches the PLL reference clock frequency value set in the parameter editor.
Table 112.
Interface: status_conduit_end
Interface type: Conduit
Signals in Interface (Direction): local_cal_success (Output), local_cal_fail (Output)
Availability: Always available
Description: The PHY calibration status output signals. When the local_cal_success signal is asserted, it indicates that PHY calibration was successful. If the local_cal_fail signal is asserted, it indicates that PHY calibration has failed. Connect this interface to the conduit of the user logic block that requires the calibration status information, or leave it unconnected.
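For illustration only, user logic can simply register these status signals; synchronize them first if they are sampled in a clock domain unrelated to the logic observing them.

// Calibration result flags for user logic, LEDs, or a status register.
reg cal_done, cal_error;
always @(posedge emif_usr_clk) begin
    cal_done  <= local_cal_success;
    cal_error <= local_cal_fail;
end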
Related Links
http://www.alterawiki.com/wiki/Measuring_Channel_Signal_Integrity
7.4.2 Generated Files for Arria 10 External Memory Interface IP
When you complete the IP generation flow, the generated files are created in your project directory. The directory structure varies somewhat, depending on the tool used to parameterize and generate the IP.
Note:
The PLL parameters are statically defined in the <variation_name>_parameters.tcl file at generation time. To ensure that timing constraints and timing reports remain correct, apply any changes you make to the PLL parameters to the PLL parameters in this file as well.
The following table lists the generated directory structure and key files created when
generating the IP.
Table 113.
Generated Directory Structure and Key Files for Synthesis
Directory
File Name
Description
working_dir/
working_dir/<Top-level Name>/
The Qsys files for your IP component
or system based on your configuration.
working_dir/<Top-level Name>/
*.ppf
Pin Planner File for use with the Pin
Planner.
working_dir/<Top-level Name>/
synth/
<Top-level Name>.v or <Top-level Name>.vhd
Qsys generated top-level wrapper for
synthesis.
working_dir/<Top-level Name>/
altera_emif_<acds version>/
synth/
*.v or (*.v and *.vhd)
Arria 10 EMIF (non-HPS) top-level
dynamic wrapper files for synthesis.
This wrapper instantiates the EMIF ECC
and EMIF Debug Interface IP core.
working_dir/<Top-level Name>/
altera_emif_a10_hps_<acds
version>/synth/
*.v or (*.v and *.vhd)
Arria 10 EMIF for HPS top-level
dynamic wrapper files for synthesis.
working_dir/<Top-level Name>/
altera_emif_a10_hps_<acds
version>/synth/
*.sv, *.sdc, *.tcl and *.hex and
*_readme.txt
Arria 10 EMIF Core RTL, constraints
files, ROM content files and information
files for synthesis.
Regardless of whether the file type is set to Verilog or VHDL, all Arria 10 EMIF Core RTL files are generated as SystemVerilog files. The readme.txt file contains information and guidelines specific to your configuration.
working_dir/<Top-level Name>/
<other components>_<acds
version>/synth/
*
Other EMIF ECC, EMIF Debug Interface
IP or Merlin Interconnect component
files for synthesis.
Table 114.
Generated Directory Structure and Key Files for Simulation
Directory
File Name
Description
working_dir/<Top-level
Name>/sim/
<Top-level Name>.v or <Top-level Name>.vhd
Qsys generated top-level wrapper for
simulation.
working_dir/<Top-level
Name>/sim/<simulator vendor>/
*.tcl, *cds.lib, *.lib, *.var,
*.sh, *.setup
Simulator-specific simulation scripts.
working_dir/<Top-level Name>/
altera_emif_<acds
version>/sim/
*.v or *.vhd
Arria 10 EMIF (non-HPS) top-level
dynamic wrapper files for simulation.
This wrapper instantiates the EMIF ECC
and EMIF Debug Interface IP cores.
working_dir/<Top-level Name>/
altera_emif_a10_hps_<acds
version>/sim/
*.v or *.vhd
Arria 10 EMIF for HPS top-level
dynamic wrapper files for simulation.
working_dir/<Top-level Name>/
altera_emif_arch_nf_<acds
version>/sim/
*sv or (*.sv and *.vhd), *.hex and
*_readme.txt
Arria 10 EMIF RTL, ROM content files,
and information files for simulation.
For SystemVerilog or mixed-language simulators, you can use the files in this folder directly. For VHDL-only simulators, use the files in the <current folder>/mentor directory instead, except for the ROM content files. The readme.txt file contains information and guidelines specific to your configuration.
working_dir/<Top-level Name>/
<other components>_<acds
version>/sim/
Other EMIF ECC, EMIF Debug Interface
IP, or Merlin Interconnect component
files for simulation
Table 115.
Generated Directory Structure and Key Files for Qsys-Generated Testbench
System
Directory
File Name
Description
working_dir/<Top-level
Name>_tb/
*.qsys
The Qsys files for the Qsys-generated testbench system.
working_dir/<Top-level
Name>_tb/sim/
<Top-level Name>.v or <Top-level Name>.vhd
Qsys generated testbench file for
simulation.
This wrapper instantiates BFM
components. For Arria 10 EMIF IP, this
module should instantiate the memory
model for the memory conduit being
exported from your created system.
working_dir/<Top-level
Name>_tb/<Top-level
Name>_<id>/sim/
<Top-level Name>.v or <Top-level Name>.vhd
Qsys generated top-level wrapper for
simulation.
working_dir/<Top-level
Name>_tb/sim/<simulator
vendor>/
*.tcl, *cds.lib, *.lib, *.var,
*.sh, *.setup
Simulator-specific simulation scripts.
working_dir/<Top-level
Name>_tb/sim/<simulator
vendor>/
*.v or *.vhd
Arria 10 EMIF (non-HPS) top-level
dynamic wrapper files for simulation.
This wrapper instantiates the EMIF ECC
and EMIF Debug Interface IP cores.
working_dir/<Top-level
Name>_tb/
altera_emif_a10_hps_<acds
version>/sim/
*.v or *.vhd
Arria 10 EMIF for HPS top-level
dynamic wrapper files for simulation.
working_dir/<Top-level
Name>_tb/
altera_emif_arch_nf_<acds
version>/sim/
*sv or (*.sv and *.vhd), *.hex and
*_readme.txt
Arria 10 EMIF Core RTL, ROM content
files and information files for
simulation.
For SystemVerilog or mixed-language simulators, you can use the files in this folder directly. For VHDL-only simulators, use the files in the <current folder>/mentor directory instead, except for the ROM content files.
The readme.txt file contains
information and guidelines specific to
your configuration.
working_dir/<Top-level
Name>_tb/sim/
altera_emif_arch_nf_<acds
version>/sim/mentor/
*.sv and *.vhd
Arria 10 EMIF Core RTL for simulation.
Only available when you create a VHDL
simulation model. All .sv files are
Mentor-tagged encrypted IP (IEEE
Encrypted Verilog) for VHDL-only
simulator support.
working_dir/<Top-level
Name>_tb/<other
components>_<acds
version>/sim/
*
Other EMIF ECC, EMIF Debug Interface
IP or Merlin Interconnect component
files for simulation.
working_dir/<Top-level
Name>_tb/<other
components>_<acds
version>/sim/mentor/
*
Other EMIF ECC, EMIF Debug Interface
IP or Merlin Interconnect component
files for simulation.
Only available depending on individual
component simulation model support
and when creating a VHDL simulation
model. All files in this folder are
Mentor-tagged encrypted IP (IEEE
Encrypted Verilog) for VHDL-only
simulator support.
Table 116.
Generated Directory Structure and Key Files for Example Simulation Designs
Directory
File Name
Description
working_dir/
*_example_design*/
*.qsys, *.tcl and readme.txt
Qsys files, generation scripts, and
information for generating the Arria 10
EMIF IP example design.
These files are available only when you
generate an example design. You may
open the .qsys file in Qsys to add more
components to the example design.
working_dir/
*_example_design*/sim/
ed_sim/sim/
*.tcl, *cds.lib, *.lib, *.var,
*.sh, *.setup
Simulator-specific simulation scripts.
working_dir/
*_example_design*/sim/
ed_sim/<simulator vendor>/
*.v or *.vhd
Qsys-generated top-level wrapper for
simulation.
working_dir/
*_example_design*/sim/ip/
ed_sim/ed_sim_emif_0/
altera_emif_<acds_version>/
simip/ed_sim/ed_sim_emif_0/
altera_emif_
*.v or *.vhd
Arria 10 EMIF (non-HPS) top-level
dynamic wrapper files for simulation.
This wrapper instantiates the EMIF ECC
and EMIF Debug Interface IP cores.
working_dir/
*_example_design*/sim/ip/
ed_sim/ed_sim_emif_0/
altera_emif_arch_nf_<acds_ver
sion>/sim/
*sv or (*.sv and *.vhd), *.hex and
*_readme.txt
Arria 10 EMIF RTL, ROM content files, and information files for simulation. For SystemVerilog or mixed-language simulators, you can use the files in this folder directly. For VHDL-only simulators, use the files in the <current folder>/mentor directory instead, except for the ROM content files. The readme.txt file contains information and guidelines specific to your configuration.
working_dir/
*_example_design*/sim/ed_sim/
<other
components>_<acds_version>/si
m/
*
Other EMIF ECC, EMIF Debug Interface
IP, or Merlin Interconnect component
files for simulation
and
working_dir/
*_example_design*/sim/ip/
ed_sim/<other
components>/sim/
and
working_dir/
*_example_design*/sim/ip/
ed_sim/<other components>/
<other
components>_<acds_version>
Table 117.
Generated Directory Structure and Key Files for Example Synthesis Designs
Directory
File Name
Description
working_dir/
*_example_design*/
*.qsys, *.tcl and readme.txt
Qsys files, generation scripts, and
information for generating the Arria 10
EMIF IP example design.
These files are available only when you
generate an example design. You may
open the .qsys file in Qsys to add more
components to the example design.
working_dir/
*_example_design*/qii/
ed_synth/synth
*.v or (*.v and *.vhd)
Qsys-generated top-level wrapper for
synthesis.
working_dir/
*_example_design*/qii/ip/
ed_synth/ed_synth_emif_0/
altera_emif_<acds_version>/
synth
*.v or (*.v and *.vhd)
Arria 10 EMIF (non-HPS) top-level
dynamic wrapper files for synthesis.
This wrapper instantiates the EMIF ECC
and EMIF debug interface core IP.
working_dir/
*_example_design*/qii/ip/
ed_synth/ed_synth_emif_0/
altera_emif_arch_nf_<acds_ver
sion>/synth/
*.sv, *.sdc, *.tcl, *.hex, and
*_readme.txt
Arria 10 EMIF Core RTL, constraints
files, ROM content files and information
files for synthesis.
Regardless of whether the file type is set to Verilog or VHDL, all Arria 10 EMIF Core RTL files are generated as SystemVerilog files. The readme.txt file
contains information and guidelines
specific to your configuration.
working_dir/
*_example_design*/sim/
ed_synth/<other
components>_<acds_version>/
synth
*
Other EMIF ECC, EMIF debug interface
IP, or Merlin interconnect component
files for synthesis.
and
working_dir/
*_example_design*/sim/ip/
ed_synth/<other_components>/
synth/
and
working_dir/
*_example_design*/sim/ip/
ed_synth/<other_components>/
<other_components>_<acds_vers
ion>/synth
7.4.3 Arria 10 EMIF IP DDR4 Parameters
The Arria 10 EMIF IP parameter editor allows you to parameterize settings for the
Arria 10 EMIF IP.
The text window at the bottom of the parameter editor displays information about the
memory interface, as well as warning and error messages. You should correct any
errors indicated in this window before clicking the Finish button.
Note:
Default settings are the minimum required to achieve timing, and may vary depending
on memory protocol.
The following tables describe the parameterization settings available in the parameter
editor for the Arria 10 EMIF IP.
7.4.3.1 Arria 10 EMIF IP DDR4 Parameters: General
Table 118.
Group: General / FPGA
Display Name
Speed grade
Table 119.
Identifier
Description
PHY_FPGA_SPEEDGRA
DE_GUI
Indicates the device speed grade, and whether it is an
engineering sample (ES) or production device. This value is
based on the device that you select in the parameter editor.
If you do not specify a device, the system assumes a
default value. Ensure that you always specify the correct
device during IP generation, otherwise your IP may not
work in hardware.
Group: General / Interface
Display Name
Identifier
Description
Configuration
PHY_CONFIG_ENUM
Specifies the configuration of the memory interface. The
available options depend on the protocol in use. Options
include Hard PHY and Hard Controller, Hard PHY and Soft
Controller, or Hard PHY only. If you select Hard PHY only,
the AFI interface is exported to allow connection of a
custom memory controller or third-party IP.
Instantiate two controllers
sharing a Ping Pong PHY
PHY_PING_PONG_EN
Specifies the instantiation of two identical memory
controllers that share an address/command bus through the
use of Ping Pong PHY. This parameter is available only if you
specify the Hard PHY and Hard Controller option. When this
parameter is enabled, the IP exposes two independent
Avalon interfaces to the user logic, and a single external
memory interface with double width for the data bus and
the CS#, CKE, ODT, and CK/CK# signals.
Table 120.
Group: General / Clocks
Display Name
Identifier
Description
Core clocks sharing
PHY_CORE_CLKS_SHA
RING_ENUM
When a design contains multiple interfaces of the same
protocol, rate, frequency, and PLL reference clock source,
they can share a common set of core clock domains. By
sharing core clock domains, they reduce clock network
usage and avoid clock synchronization logic between the
interfaces. To share core clocks, denote one of the
interfaces as "Master", and the remaining interfaces as
"Slave". In the RTL, connect the clks_sharing_master_out
signal from the master interface to the
clks_sharing_slave_in signal of all the slave interfaces. Both
master and slave interfaces still expose their own output
clock ports in the RTL (for example, emif_usr_clk, afi_clk),
but the physical signals are equivalent, hence it does not
matter whether a clock port from a master or a slave is
used. As the combined width of all interfaces sharing the
same core clock increases, you may encounter timing
closure difficulty for transfers between the FPGA core and
the periphery.
Use recommended PLL
reference clock frequency
PHY_DDR4_DEFAULT_
REF_CLK_FREQ
Specifies that the PLL reference clock frequency is
automatically calculated for best performance. If you want
to specify a different PLL reference clock frequency, uncheck
the check box for this parameter.
Memory clock frequency
PHY_MEM_CLK_FREQ_
MHZ
Specifies the operating frequency of the memory interface
in MHz. If you change the memory frequency, you should
update the memory latency parameters on the "Memory"
tab and the memory timing parameters on the "Mem
Timing" tab.
Clock rate of user logic
PHY_RATE_ENUM
Specifies the relationship between the user logic clock
frequency and the memory clock frequency. For example, if
the memory clock sent from the FPGA to the memory
device is toggling at 800MHz, a quarter-rate interface
means that the user logic in the FPGA runs at 200MHz.
PLL reference clock frequency
PHY_REF_CLK_FREQ_
MHZ
Specifies the PLL reference clock frequency. You must
configure this parameter only if you do not check the "Use
recommended PLL reference clock frequency" parameter. To
configure this parameter, select a valid PLL reference clock
frequency from the list. The values in the list can change if
you change the memory interface frequency and/or the
clock rate of the user logic. For best jitter performance, you
should use the fastest possible PLL reference clock
frequency.
PLL reference clock jitter
PHY_REF_CLK_JITTER
_PS
Specifies the peak-to-peak jitter on the PLL reference clock
source. The clock source of the PLL reference clock must
meet or exceed the following jitter requirements: 10ps peak
to peak, or 1.42ps RMS at 1e-12 BER, 1.22ps at 1e-16 BER.
Specify additional core clocks
based on existing PLL
PLL_ADD_EXTRA_CLK
S
Displays additional parameters allowing you to create
additional output clocks based on the existing PLL. This
parameter provides an alternative clock-generation
mechanism for when your design exhausts available PLL
resources. The additional output clocks that you create can
be fed into the core. Clock signals created with this
parameter are synchronous to each other, but asynchronous
to the memory interface core clock domains (such as
emif_usr_clk or afi_clk). You must follow proper clock-domain-crossing techniques when transferring data between clock domains.
Table 121.
Group: General / Additional Core Clocks
Display Name
Number of additional core
clocks
Identifier
PLL_USER_NUM_OF_E
XTRA_CLKS
Description
Specifies the number of additional output clocks to create
from the PLL.
7.4.3.2 Arria 10 EMIF IP DDR4 Parameters: Memory
Table 122.
Group: Memory / Topology
Display Name
Identifier
Description
DQS group of ALERT#
MEM_DDR4_ALERT_N
_DQS_GROUP
Select the DQS group with which the ALERT# pin is placed.
ALERT# pin placement
MEM_DDR4_ALERT_N
_PLACEMENT_ENUM
Specifies placement for the mem_alert_n signal. If you
select "I/O Lane with Address/Command Pins", you can pick
the I/O lane and pin index in the add/cmd bank with the
subsequent drop down menus. If you select "I/O Lane with
DQS Group", you can specify the DQS group with which to
place the mem_alert_n pin. If you select "Automatically
select a location", the IP automatically selects a pin for the
mem_alert_n signal. If you select this option, no additional
location constraints can be applied to the mem_alert_n pin,
or a fitter error will result during compilation. For optimum signal integrity, you should choose "I/O Lane with Address/Command Pins". For interfaces containing multiple memory devices, it is recommended to connect the ALERT# pins together to the ALERT# pin on the FPGA.
Enable ALERT#/PAR pins
MEM_DDR4_ALERT_P
AR_EN
Allows address/command calibration, which may provide
better margins on the address/command bus. The alert_n
signal is not accessible in the AFI or Avalon domains. This
means there is no way to know whether a parity error has
occurred during user mode. The parity pin is a dedicated pin
in the address/command bank, but the alert_n pin can be
placed in any bank that spans the memory interface. You
should explicitly choose the location of the alert_n pin and
place it in the address/command bank.
Bank address width
MEM_DDR4_BANK_AD
DR_WIDTH
Specifies the number of bank address pins. Refer to the
data sheet for your memory device. The density of the
selected memory device determines the number of bank
address pins needed for access to all available banks.
Bank group width
MEM_DDR4_BANK_GR
OUP_WIDTH
Specifies the number of bank group pins. Refer to the data
sheet for your memory device. The density of the selected
memory device determines the number of bank group pins
needed for access to all available bank groups.
Chip ID width
MEM_DDR4_CHIP_ID_
WIDTH
Specifies the number of chip ID pins. Only applicable to
registered and load-reduced DIMMs that use 3DS/TSV
memory devices.
Number of clocks
MEM_DDR4_CK_WIDT
H
Specifies the number of CK/CK# clock pairs exposed by the
memory interface. Usually more than 1 pair is required for
RDIMM/LRDIMM formats. The value of this parameter
depends on the memory device selected; refer to the data
sheet for your memory device.
Column address width
MEM_DDR4_COL_ADD
R_WIDTH
Specifies the number of column address pins. Refer to the
data sheet for your memory device. The density of the
selected memory device determines the number of address
pins needed for access to all available columns.
Number of chip selects per
DIMM
MEM_DDR4_CS_PER_
DIMM
Specifies the number of chip selects per DIMM.
Number of chip selects
MEM_DDR4_DISCRET
E_CS_WIDTH
Specifies the total number of chip selects in the interface,
up to a maximum of 4. This parameter applies to discrete
components only.
Data mask
MEM_DDR4_DM_EN
Indicates whether the interface uses data mask (DM) pins.
This feature allows specified portions of the data bus to be
written to memory (not available in x4 mode). One DM pin
exists per DQS group.
Number of DQS groups
MEM_DDR4_DQS_WI
DTH
Specifies the total number of DQS groups in the interface.
This value is automatically calculated as the DQ width
divided by the number of DQ pins per DQS group.
DQ pins per DQS group
MEM_DDR4_DQ_PER_
DQS
Specifies the total number of DQ pins per DQS group.
DQ width
MEM_DDR4_DQ_WIDT
H
Specifies the total number of data pins in the interface. The
maximum supported width is 144, or 72 in Ping Pong PHY
mode.
Memory format
MEM_DDR4_FORMAT_
ENUM
Specifies the format of the external memory device. The
following formats are supported: Component - a Discrete
memory device; UDIMM - Unregistered/Unbuffered DIMM
where address/control, clock, and data are unbuffered;
RDIMM - Registered DIMM where address/control and clock
are buffered; LRDIMM - Load Reduction DIMM where
address/control, clock, and data are buffered. LRDIMM
reduces the load to increase memory speed and supports
higher densities than RDIMM; SODIMM - Small Outline
DIMM is similar to UDIMM but smaller in size and is typically
used for systems with limited space. Some memory
protocols may not be available in all formats.
Number of DIMMs
MEM_DDR4_NUM_OF_
DIMMS
Total number of DIMMs.
Number of physical ranks per
DIMM
MEM_DDR4_RANKS_P
ER_DIMM
Number of ranks per DIMM. For LRDIMM, this represents
the number of physical ranks on the DIMM behind the
memory buffer
Read DBI
MEM_DDR4_READ_DB
I
Specifies whether the interface uses read data bus inversion
(DBI). Enable this feature for better signal integrity and
read margin. This feature is not available in x4
configurations.
Row address width
MEM_DDR4_ROW_AD
DR_WIDTH
Specifies the number of row address pins. Refer to the data
sheet for your memory device. The density of the selected
memory device determines the number of address pins
needed for access to all available rows.
Write DBI
MEM_DDR4_WRITE_D
BI
Indicates whether the interface uses write data bus
inversion (DBI). This feature provides better signal integrity
and write margin. This feature is unavailable if Data Mask is
enabled or in x4 mode.
Table 123.
Group: Memory / Latency and Burst
Display Name
Identifier
Description
Addr/CMD parity latency
MEM_DDR4_AC_PARIT
Y_LATENCY
Additional latency incurred by enabling address/command
parity check. Select a value to enable address/command
parity with the latency associated with the selected value.
Select Disable to disable address/command parity.
Memory additive CAS latency
setting
MEM_DDR4_ATCL_EN
UM
Determines the posted CAS additive latency of the memory
device. Enable this feature to improve command and bus
efficiency, and increase system bandwidth.
Burst Length
MEM_DDR4_BL_ENUM
Specifies the DRAM burst length which determines how
many consecutive addresses should be accessed for a given
read/write command.
Read Burst Type
MEM_DDR4_BT_ENUM
Indicates whether accesses within a given burst are in
sequential or interleaved order. Select sequential if you are
using the Intel-provided memory controller.
Memory CAS latency setting
MEM_DDR4_TCL
Specifies the number of clock cycles between the read
command and the availability of the first bit of output data
at the memory device. Overall read latency equals the
additive latency (AL) + the CAS latency (CL). Overall read
latency depends on the memory device selected; refer to
the datasheet for your device.
Memory write CAS latency
setting
MEM_DDR4_WTCL
Specifies the number of clock cycles from the release of
internal write to the latching of the first data in at the
memory device. This value depends on the memory device
selected; refer to the datasheet for your device.
Table 124.
Group: Memory / Mode Register Settings
Display Name
Identifier
Description
Auto self-refresh method
MEM_DDR4_ASR_ENU
M
Indicates whether to enable or disable auto self-refresh.
Auto self-refresh allows the controller to issue self-refresh
requests, rather than manually issuing self-refresh in order
for memory to retain data.
Fine granularity refresh
MEM_DDR4_FINE_GR
ANULARITY_REFRESH
Increased frequency of refresh in exchange for shorter
refresh. Shorter tRFC and increased cycle time can produce
higher bandwidth.
Internal VrefDQ monitor
MEM_DDR4_INTERNA
L_VREFDQ_MONITOR
Indicates whether to enable the internal VrefDQ monitor.
ODT input buffer during
powerdown mode
MEM_DDR4_ODT_IN_
POWERDOWN
Indicates whether to enable on-die termination (ODT) input
buffer during powerdown mode.
Read preamble
MEM_DDR4_READ_PR
EAMBLE
Number of read preamble cycles. This mode register setting
determines the number of cycles DQS (read) will go low
before starting to toggle.
Self refresh abort
MEM_DDR4_SELF_RFS
H_ABORT
Self refresh abort for latency reduction.
Temperature controlled refresh
enable
MEM_DDR4_TEMP_CO
NTROLLED_RFSH_ENA
Indicates whether to enable temperature controlled refresh,
which allows the device to adjust the internal refresh period
to be longer than tREFI of the normal temperature range by
skipping external refresh commands.
Temperature controlled refresh
range
MEM_DDR4_TEMP_CO
NTROLLED_RFSH_RAN
GE
Indicates the temperature-controlled refresh range: normal temperature mode covers 0°C to 85°C, and extended mode covers 0°C to 95°C.
Write preamble
MEM_DDR4_WRITE_P
REAMBLE
Write preamble cycles.
7.4.3.3 Arria 10 EMIF IP DDR4 Parameters: Mem I/O
Table 125.
Group: Mem I/O / Memory I/O Settings
Display Name
Identifier
Description
DB Host Interface DQ Driver
MEM_DDR4_DB_DQ_
DRV_ENUM
Specifies the driver impedance setting for the host interface
of the data buffer. This parameter determines the value of
the control word BC03 of the data buffer. Perform board
simulation to obtain the optimal value for this setting.
DB Host Interface DQ
RTT_NOM
MEM_DDR4_DB_RTT_
NOM_ENUM
Specifies the RTT_NOM setting for the host interface of the
data buffer. Only "RTT_NOM disabled" is supported. This
parameter determines the value of the control word BC00 of
the data buffer.
DB Host Interface DQ
RTT_PARK
MEM_DDR4_DB_RTT_
PARK_ENUM
Specifies the RTT_PARK setting for the host interface of the
data buffer. This parameter determines the value of control
word BC02 of the data buffer. Perform board simulation to
obtain the optimal value for this setting.
DB Host Interface DQ RTT_WR
MEM_DDR4_DB_RTT_
WR_ENUM
Specifies the RTT_WR setting of the host interface of the
data buffer. This parameter determines the value of the
control word BC01 of the data buffer. Perform board
simulation to obtain the optimal value for this setting.
Use recommended initial
VrefDQ value
MEM_DDR4_DEFAULT
_VREFOUT
Specifies to use the recommended initial VrefDQ value. This
value is used as a starting point and may change after
calibration.
Output drive strength setting
MEM_DDR4_DRV_STR
_ENUM
Specifies the output driver impedance setting at the
memory device. To obtain optimum signal integrity
performance, select option based on board simulation
results.
RCD CA Input Bus Termination
MEM_DDR4_RCD_CA_
IBT_ENUM
Specifies the input termination setting for the following pins
of the registering clock driver: DA0..DA17, DBA0..DBA1,
DBG0..DBG1, DACT_n, DC2, DPAR. This parameter
determines the value of bits DA[1:0] of control word RC7x
of the registering clock driver. Perform board simulation to
obtain the optimal value for this setting.
RCD DCKE Input Bus
Termination
MEM_DDR4_RCD_CKE
_IBT_ENUM
Specifies the input termination setting for the following pins
of the registering clock driver: DCKE0, DCKE1. This
parameter determines the value of bits DA[5:4] of control
word RC7x of the registering clock driver. Perform board
simulation to obtain the optimal value for this setting.
RCD DCS[3:0]_n Input Bus
Termination
MEM_DDR4_RCD_CS_
IBT_ENUM
Specifies the input termination setting for the following pins
of the registering clock driver: DCS[3:0]_n. This parameter
determines the value of bits DA[3:2] of control word RC7x
of the registering clock driver. Perform board simulation to
obtain the optimal value for this setting.
RCD DODT Input Bus
Termination
MEM_DDR4_RCD_ODT
_IBT_ENUM
Specifies the input termination setting for the following pins
of the registering clock driver: DODT0, DODT1. This
parameter determines the value of bits DA[7:6] of control
word RC7x of the registering clock driver. Perform board
simulation to obtain the optimal value for this setting.
ODT Rtt nominal value
MEM_DDR4_RTT_NOM
_ENUM
Determines the nominal on-die termination value applied to
the DRAM. The termination is applied any time that ODT is
asserted. If you specify a different value for RTT_WR, that
value takes precedence over the values mentioned here. For
optimum signal integrity performance, select your option
based on board simulation results.
RTT PARK
MEM_DDR4_RTT_PAR
K
If set, the value is applied when the DRAM is not being
written AND ODT is not asserted HIGH.
Dynamic ODT (Rtt_WR) value
MEM_DDR4_RTT_WR_
ENUM
Specifies the mode of the dynamic on-die termination (ODT)
during writes to the memory device (used for multi-rank
configurations). For optimum signal integrity performance,
select this option based on board simulation results.
RCD and DB Manufacturer
(LSB)
MEM_DDR4_SPD_133
_RCD_DB_VENDOR_L
SB
Specifies the LSB of the ID code of the registering clock
driver and data buffer manufacturer. The value must come
from Byte 133 of the SPD from the DIMM vendor.
RCD and DB Manufacturer
(MSB)
MEM_DDR4_SPD_134
_RCD_DB_VENDOR_M
SB
Specifies the MSB of the ID code of the registering clock
driver and data buffer manufacturer. The value must come
from Byte 134 of the SPD from the DIMM vendor.
RCD Revision Number
MEM_DDR4_SPD_135
_RCD_REV
Specifies the die revision of the registering clock driver. The
value must come from Byte 135 of the SPD from the DIMM
vendor.
SPD Byte 137 - RCD Drive
Strength for Command/
Address
MEM_DDR4_SPD_137
_RCD_CA_DRV
Specifies the drive strength of the registering clock driver's
control and command/address outputs to the DRAM. The
value must come from Byte 137 of the SPD from the DIMM
vendor.
SPD Byte 138 - RCD Drive
Strength for CK
MEM_DDR4_SPD_138
_RCD_CK_DRV
Specifies the drive strength of the registering clock driver's
clock outputs to the DRAM. The value must come from Byte
138 of the SPD from the DIMM vendor.
DB Revision Number
MEM_DDR4_SPD_139
_DB_REV
Specifies the die revision of the data buffer. The value must
come from Byte 139 of the SPD from the DIMM vendor.
SPD Byte 140 - DRAM VrefDQ
for Package Rank 0
MEM_DDR4_SPD_140
_DRAM_VREFDQ_R0
Specifies the VrefDQ setting for package rank 0 of an
LRDIMM. The value must come from Byte 140 of the SPD
from the DIMM vendor.
SPD Byte 141 - DRAM VrefDQ
for Package Rank 1
MEM_DDR4_SPD_141
_DRAM_VREFDQ_R1
Specifies the VrefDQ setting for package rank 1 of an
LRDIMM. The value must come from Byte 141 of the SPD
from the DIMM vendor.
SPD Byte 142 - DRAM VrefDQ
for Package Rank 2
MEM_DDR4_SPD_142
_DRAM_VREFDQ_R2
Specifies the VrefDQ setting for package rank 2 (if it exists)
of an LRDIMM. The value must come from Byte 142 of the
SPD from the DIMM vendor.
SPD Byte 143 - DRAM VrefDQ
for Package Rank 3
MEM_DDR4_SPD_143
_DRAM_VREFDQ_R3
Specifies the VrefDQ setting for package rank 3 (if it exists)
of an LRDIMM. The value must come from Byte 143 of the
SPD from the DIMM vendor.
SPD Byte 144 - DB VrefDQ for
DRAM Interface
MEM_DDR4_SPD_144
_DB_VREFDQ
Specifies the VrefDQ setting of the data buffer's DRAM
interface. The value must come from Byte 144 of the SPD
from the DIMM vendor.
SPD Byte 145-147 - DB MDQ
Drive Strength and RTT
MEM_DDR4_SPD_145
_DB_MDQ_DRV
Specifies the drive strength of the MDQ pins of the data buffer's DRAM interface. The value must come from either Byte 145 (data rate ≤ 1866), Byte 146 (1866 < data rate ≤ 2400), or Byte 147 (2400 < data rate ≤ 3200) of the SPD from the DIMM vendor.
SPD Byte 148 - DRAM Drive
Strength
MEM_DDR4_SPD_148
_DRAM_DRV
Specifies the drive strength of the DRAM. The value must
come from Byte 148 of the SPD from the DIMM vendor.
SPD Byte 149-151 - DRAM ODT
(RTT_WR and RTT_NOM)
MEM_DDR4_SPD_149
_DRAM_RTT_WR_NOM
Specifies the RTT_WR and RTT_NOM setting of the DRAM. The value must come from either Byte 149 (data rate ≤ 1866), Byte 150 (1866 < data rate ≤ 2400), or Byte 151 (2400 < data rate ≤ 3200) of the SPD from the DIMM vendor.
SPD Byte 152-154 - DRAM ODT
(RTT_PARK)
MEM_DDR4_SPD_152
_DRAM_RTT_PARK
Specifies the RTT_PARK setting of the DRAM. The value must come from either Byte 152 (data rate ≤ 1866), Byte 153 (1866 < data rate ≤ 2400), or Byte 154 (2400 < data rate ≤ 3200) of the SPD from the DIMM vendor.
VrefDQ training range
MEM_DDR4_VREFDQ_
TRAINING_RANGE
VrefDQ training range.
VrefDQ training value
MEM_DDR4_VREFDQ_
TRAINING_VALUE
VrefDQ training value.
Table 126.
Group: Mem I/O / ODT Activation
Display Name
Use Default ODT Assertion
Tables
Identifier
Description
MEM_DDR4_USE_DEF
AULT_ODT
Enables the default ODT assertion pattern as determined
from vendor guidelines. These settings are provided as a
default only; you should simulate your memory interface to
determine the optimal ODT settings and assertion patterns.
7.4.3.4 Arria 10 EMIF IP DDR4 Parameters: FPGA I/O
You should use Hyperlynx* or similar simulators to determine the best settings for
your board. Refer to the EMIF Simulation Guidance wiki page for additional
information.
Table 127.
Group: FPGA IO / FPGA IO Settings
Display Name
Identifier
Description
Use default I/O settings
PHY_DDR4_DEFAULT_
IO
Specifies that a legal set of I/O settings is automatically selected. The default I/O settings are not necessarily
optimized for a specific board. To achieve optimal signal
integrity, perform I/O simulations with IBIS models and
enter the I/O settings manually, based on simulation
results.
Voltage
PHY_DDR4_IO_VOLTA
GE
The voltage level for the I/O pins driving the signals
between the memory device and the FPGA memory
interface.
Periodic OCT re-calibration
PHY_USER_PERIODIC
_OCT_RECAL_ENUM
Specifies that the system periodically recalibrate on-chip
termination (OCT) to minimize variations in termination
value caused by changing operating conditions (such as
changes in temperature). By recalibrating OCT, I/O timing
margins are improved. When enabled, this parameter
causes the PHY to halt user traffic about every 0.5 seconds
for about 1900 memory clock cycles, to perform OCT
recalibration. Efficiency is reduced by about 1% when this
option is enabled.
Table 128.
Group: FPGA IO / Address/Command
Display Name
Identifier
Description
I/O standard
PHY_DDR4_USER_AC
_IO_STD_ENUM
Specifies the I/O electrical standard for the address/
command pins of the memory interface. The selected I/O
standard configures the circuit within the I/O buffer to
match the industry standard.
Output mode
PHY_DDR4_USER_AC
_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_DDR4_USER_AC
_SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 129.
Group: FPGA IO / Memory Clock
Display Name
Identifier
Description
I/O standard
PHY_DDR4_USER_CK
_IO_STD_ENUM
Specifies the I/O electrical standard for the memory clock
pins. The selected I/O standard configures the circuit within
the I/O buffer to match the industry standard.
Output mode
PHY_DDR4_USER_CK
_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_DDR4_USER_CK
_SLEW_RATE_ENUM
Specifies the slew rate of the memory clock output pins. The slew rate (or edge rate) describes how quickly the signal can transition, measured in voltage per unit time. Perform board simulations to determine the slew rate that provides the best eye opening for the memory clock signals.
Table 130.
Group: FPGA IO / Data Bus
Display Name
Identifier
Description
Use recommended initial Vrefin
PHY_DDR4_USER_AU
TO_STARTING_VREFI
N_EN
Specifies that the initial Vrefin setting is calculated
automatically, to a reasonable value based on termination
settings.
Input mode
PHY_DDR4_USER_DA
TA_IN_MODE_ENUM
This parameter allows you to change the input termination
settings for the selected I/O standard. Perform board
simulation with IBIS models to determine the best settings
for your design.
I/O standard
PHY_DDR4_USER_DA
TA_IO_STD_ENUM
Specifies the I/O electrical standard for the data and data
clock/strobe pins of the memory interface. The selected I/O
standard option configures the circuit within the I/O buffer
to match the industry standard.
Output mode
PHY_DDR4_USER_DA
TA_OUT_MODE_ENUM
This parameter allows you to change the output current
drive strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Initial Vrefin
PHY_DDR4_USER_STA
RTING_VREFIN
Specifies the initial value for the reference voltage on the
data pins (Vrefin). This value is entered as a percentage of
the supply voltage level on the I/O pins. The specified value
serves as a starting point and may be overridden by
calibration to provide better timing margins. If you choose
to skip Vref calibration (Diagnostics tab), this is the value
that is used as the Vref for the interface.
Table 131.
Group: FPGA IO / PHY Inputs
Display Name
Identifier
Description
PLL reference clock I/O
standard
PHY_DDR4_USER_PLL
_REF_CLK_IO_STD_E
NUM
Specifies the I/O standard for the PLL reference clock of the
memory interface.
RZQ I/O standard
PHY_DDR4_USER_RZ
Q_IO_STD_ENUM
Specifies the I/O standard for the RZQ pin used in the
memory interface.
RZQ resistor
PHY_RZQ
Specifies the reference resistor used to calibrate the on-chip
termination value. You should connect the RZQ pin to GND
through an external resistor of the specified value.
7.4.3.5 Arria 10 EMIF IP DDR4 Parameters: Mem Timing
These parameters should be read from the table in the datasheet associated with the
speed bin of the memory device (not necessarily the frequency at which the interface
is running).
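Most of the speed-bin values are given in nanoseconds or picoseconds, but they must still be honored as a whole number of memory clock cycles at the interface frequency you have chosen. The following minimal Python sketch shows that conversion; the figures used (a 32 ns tRAS at a 1200 MHz memory clock) are placeholders, not values from any particular speed bin.

import math

def ns_to_cycles(t_ns, mem_clk_mhz):
    # One memory clock period in ns, then round the datasheet spec up to whole cycles.
    t_ck_ns = 1000.0 / mem_clk_mhz
    return math.ceil(t_ns / t_ck_ns)

print(ns_to_cycles(32.0, 1200.0))  # e.g. a 32 ns tRAS needs 39 cycles at 1200 MHz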
Table 132.
Group: Mem Timing / Parameters dependent on Speed Bin
Display Name
Identifier
Description
Speed bin
MEM_DDR4_SPEEDBI
N_ENUM
The speed grade of the memory device used. This
parameter refers to the maximum rate at which the
memory device is specified to run.
TdiVW_total
MEM_DDR4_TDIVW_T
OTAL_UI
TdiVW_total describes the minimum horizontal width of the
DQ eye opening required by the receiver (memory device/
DIMM). It is measured in UI (1UI = half the memory clock
period).
tDQSCK
MEM_DDR4_TDQSCK_
PS
tDQSCK describes the skew between the memory clock (CK)
and the input data strobes (DQS) used for reads. It is the
time between the rising data strobe edge (DQS, DQS#)
relative to the rising CK edge.
tDQSQ
MEM_DDR4_TDQSQ_U
I
tDQSQ describes the latest valid transition of the associated
DQ pins for a READ. tDQSQ specifically refers to the DQS,
DQS# to DQ skew. It is the length of time between the
DQS, DQS# crossing to the last valid transition of the
slowest DQ pin in the DQ group associated with that DQS
strobe.
tDQSS
MEM_DDR4_TDQSS_C
YC
tDQSS describes the skew between the memory clock (CK)
and the output data strobes used for writes. It is the time
between the rising data strobe edge (DQS, DQS#) relative
to the rising CK edge.
tDSH
MEM_DDR4_TDSH_CY
C
tDSH specifies the write DQS hold time. This is the time
difference between the rising CK edge and the falling edge
of DQS, measured as a percentage of tCK.
tDSS
MEM_DDR4_TDSS_CY
C
tDSS describes the time between the falling edge of DQS to
the rising edge of the next CK transition.
tIH (base) DC level
MEM_DDR4_TIH_DC_
MV
tIH (base) DC level refers to the voltage level which the
address/command signal must not cross during the hold
window. The signal is considered stable only if it remains
above this voltage level (for a logic 1) or below this voltage
level (for a logic 0) for the entire hold period.
tIH (base)
MEM_DDR4_TIH_PS
tIH (base) refers to the hold time for the Address/Command
(A) bus after the rising edge of CK. Depending on what AC
level the user has chosen for a design, the hold margin can
vary (this variance will be automatically determined when
the user chooses the "tIH (base) AC level").
tINIT
MEM_DDR4_TINIT_US
tINIT describes the time duration of the memory
initialization after a device power-up. After RESET_n is deasserted, wait for another 500us until CKE becomes active.
During this time, the DRAM will start internal initialization;
this will be done independently of external clocks.
tIS (base) AC level
MEM_DDR4_TIS_AC_
MV
tIS (base) AC level refers to the voltage level which the
address/command signal must cross and remain above
during the setup margin window. The signal is considered
stable only if it remains above this voltage level (for a logic
1) or below this voltage level (for a logic 0) for the entire
setup period.
tIS (base)
MEM_DDR4_TIS_PS
tIS (base) refers to the setup time for the Address/
Command/Control (A) bus to the rising edge of CK.
tMRD
MEM_DDR4_TMRD_CK
_CYC
The mode register set command cycle time, tMRD is the
minimum time period required between two MRS
commands.
tQH
MEM_DDR4_TQH_UI
tQH specifies the output hold time for the DQ in relation to
DQS, DQS#. It is the length of time between the DQS,
DQS# crossing to the earliest invalid transition of the
fastest DQ pin in the DQ group associated with that DQS
strobe.
tQSH
MEM_DDR4_TQSH_CY
C
tQSH refers to the differential High Pulse Width, which is
measured as a percentage of tCK. It is the time during
which the DQS is high for a read.
tRAS
MEM_DDR4_TRAS_NS
tRAS describes the activate to precharge duration. A row
cannot be deactivated until the tRAS time has been met.
Therefore, tRAS determines how long the memory has to
wait after an activate command before a precharge command
can be issued to close the row.
tRCD
MEM_DDR4_TRCD_NS
tRCD, row command delay, describes the amount of delay
between the activation of a row through the RAS command
and the access to the data through the CAS command.
tRP
MEM_DDR4_TRP_NS
tRP refers to the Precharge (PRE) command period. It
describes how long it takes for the memory to disable
access to a row by precharging and before it is ready to
activate a different row.
tWLH
MEM_DDR4_TWLH_PS
tWLH describes the write leveling hold time from the rising
edge of DQS to the rising edge of CK.
tWLS
MEM_DDR4_TWLS_PS
tWLS describes the write leveling setup time. It is measured
from the rising edge of CK to the rising edge of DQS.
tWR
MEM_DDR4_TWR_NS
tWR refers to the Write Recovery time. It specifies the
number of clock cycles needed to complete a write before a
precharge command can be issued.
VdiVW_total
MEM_DDR4_VDIVW_T
OTAL
VdiVW_total describes the Rx Mask voltage, or the minimum
vertical width of the DQ eye opening required by the
receiver (memory device/DIMM). It is measured in mV.
Table 133.
Group: Mem Timing / Parameters dependent on Speed Bin, Operating
Frequency, and Page Size
Display Name
Identifier
Description
tCCD_L
MEM_DDR4_TCCD_L_
CYC
tCCD_L refers to the CAS_n-to-CAS_n delay (long). It is the
minimum time interval between two read/write (CAS)
commands to the same bank group.
tCCD_S
MEM_DDR4_TCCD_S_
CYC
tCCD_S refers to the CAS_n-to-CAS_n delay (short). It is
the minimum time interval between two read/write (CAS)
commands to different bank groups.
tFAW_dlr
MEM_DDR4_TFAW_DL
R_CYC
tFAW_dlr refers to the four activate window to different
logical ranks. It describes the period of time during which
only four banks can be active across all logical ranks within
a 3DS DDR4 device.
tFAW
MEM_DDR4_TFAW_NS
tFAW refers to the four activate window time. It describes
the period of time during which only four banks can be
active.
tRRD_dlr
MEM_DDR4_TRRD_DL
R_CYC
tRRD_dlr refers to the Activate to Activate Command Period
to Different Logical Ranks. It is the minimum time interval
(measured in memory clock cycles) between two activate
commands to different logical ranks within a 3DS DDR4
device.
tRRD_L
MEM_DDR4_TRRD_L_
CYC
tRRD_L refers to the Activate to Activate Command Period
(long). It is the minimum time interval (measured in
memory clock cycles) between two activate commands to
the same bank group.
tRRD_S
MEM_DDR4_TRRD_S_
CYC
tRRD_S refers to the Activate to Activate Command Period
(short). It is the minimum time interval between two
activate commands to different bank groups.
tRTP
MEM_DDR4_TRTP_CY
C
tRTP refers to the internal READ Command to PRECHARGE
Command delay. It is the number of memory clock cycles
that is needed between a read command and a precharge
command to the same rank.
tWTR_L
MEM_DDR4_TWTR_L_
CYC
tWTR_L or Write Timing Parameter describes the delay from
start of internal write transaction to internal read command,
for accesses to the same bank group. The delay is
measured from the first rising memory clock edge after the
last write data is received to the rising memory clock edge
when a read command is received.
tWTR_S
MEM_DDR4_TWTR_S_
CYC
tWTR_S or Write Timing Parameter describes the delay from
start of internal write transaction to internal read command,
for accesses to the different bank group. The delay is
measured from the first rising memory clock edge after the
last write data is received to the rising memory clock edge
when a read command is received.
Table 134.
Group: Mem Timing / Parameters dependent on Density and Temperature
Display Name
Identifier
Description
tREFI
MEM_DDR4_TREFI_US
tREFI refers to the average periodic refresh interval. It is
the maximum amount of time the memory can tolerate in
between refresh commands.
tRFC_dlr
MEM_DDR4_TRFC_DL
R_NS
tRFC_dlr refers to the Refresh Cycle Time to different logical
rank. It is the amount of delay after a refresh command to
one logical rank before an activate command can be
accepted by another logical rank within a 3DS DDR4 device.
This parameter is dependent on the memory density and is
necessary for proper hardware functionality.
tRFC
MEM_DDR4_TRFC_NS
tRFC refers to the Refresh Cycle Time. It is the amount of
delay after a refresh command before an activate command
can be accepted by the memory. This parameter is
dependent on the memory density and is necessary for
proper hardware functionality.
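As a rough illustration of why tREFI and tRFC matter for efficiency, the fraction of time the memory is unavailable because of refresh is approximately tRFC divided by tREFI. The values below (tREFI = 7.8 us, tRFC = 350 ns) are illustrative assumptions only; take the real figures from your device's datasheet for its density and operating temperature.

t_refi_ns = 7800.0   # average periodic refresh interval (assumed)
t_rfc_ns = 350.0     # refresh cycle time (assumed)

refresh_overhead = t_rfc_ns / t_refi_ns
print("Approximate bandwidth lost to refresh: {:.1%}".format(refresh_overhead))  # ~4.5%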
7.4.3.6 Arria 10 EMIF IP DDR4 Parameters: Board
Table 135.
Group: Board / Intersymbol Interference/Crosstalk
Display Name
Identifier
Description
Address and command ISI/
crosstalk
BOARD_DDR4_USER_
AC_ISI_NS
The address and command window reduction due to ISI and
crosstalk effects. The number to be entered is the total loss
of margin on both the setup and hold sides (measured loss
on the setup side + measured loss on the hold side). Refer
to the EMIF Simulation Guidance wiki page for additional
information.
Read DQS/DQS# ISI/crosstalk
BOARD_DDR4_USER_
RCLK_ISI_NS
The reduction of the read data window due to ISI and
crosstalk effects on the DQS/DQS# signal when driven by
the memory device during a read. The number to be
entered is the total loss of margin on the setup and hold
sides (measured loss on the setup side + measured loss on
the hold side). Refer to the EMIF Simulation Guidance wiki
page for additional information.
Read DQ ISI/crosstalk
BOARD_DDR4_USER_
RDATA_ISI_NS
The reduction of the read data window due to ISI and
crosstalk effects on the DQ signal when driven by the
memory device during a read. The number to be entered is
the total loss of margin on the setup and hold side
(measured loss on the setup side + measured loss on the
hold side). Refer to the EMIF Simulation Guidance wiki page
for additional information.
Write DQS/DQS# ISI/crosstalk
BOARD_DDR4_USER_
WCLK_ISI_NS
The reduction of the write data window due to ISI and
crosstalk effects on the DQS/DQS# signal when driven by
the FPGA during a write. The number to be entered is the
total loss of margin on the setup and hold sides (measured
loss on the setup side + measured loss on the hold side).
Refer to the EMIF Simulation Guidance wiki page for
additional information.
Write DQ ISI/crosstalk
BOARD_DDR4_USER_
WDATA_ISI_NS
The reduction of the write data window due to ISI and
crosstalk effects on the DQ signal when driven by the FPGA
during a write. The number to be entered is the total loss of
margin on the setup and hold sides (measured loss on the
setup side + measured loss on the hold side). Refer to the
EMIF Simulation Guidance wiki page for additional
information.
Use default ISI/crosstalk
values
BOARD_DDR4_USE_D
EFAULT_ISI_VALUES
You can enable this option to use default intersymbol
interference and crosstalk values for your topology. Note
that the default values are not optimized for your board. For
optimal signal integrity, it is recommended that you do not enable this parameter; instead, perform I/O simulation using IBIS models and Hyperlynx*, and manually enter values based on your simulation results.
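Each ISI/crosstalk field in the table above expects a single number equal to the measured setup-side loss plus the measured hold-side loss. A minimal sketch of that bookkeeping, using placeholder simulation results:

setup_side_loss_ns = 0.045  # margin lost on the setup side (from board simulation, assumed)
hold_side_loss_ns = 0.030   # margin lost on the hold side (from board simulation, assumed)

read_dq_isi_ns = setup_side_loss_ns + hold_side_loss_ns
print("Enter {:.3f} ns for Read DQ ISI/crosstalk".format(read_dq_isi_ns))  # 0.075 ns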
Table 136.
Group: Board / Board and Package Skews
Display Name
Identifier
Description
Average delay difference
between address/command
and CK
BOARD_DDR4_AC_TO
_CK_SKEW_NS
The average delay difference between the address/
command signals and the CK signal, calculated by averaging
the longest and smallest address/command signal trace
delay minus the maximum CK trace delay. Positive values
represent address and command signals that are longer
than CK signals and negative values represent address and
command signals that are shorter than CK signals.
Maximum board skew within
address/command bus
BOARD_DDR4_BRD_S
KEW_WITHIN_AC_NS
The largest skew between the address and command
signals.
Maximum board skew within
DQS group
BOARD_DDR4_BRD_S
KEW_WITHIN_DQS_N
S
The largest skew between all DQ and DM pins in a DQS
group. This value affects the read capture and write
margins.
Average delay difference
between DQS and CK
BOARD_DDR4_DQS_T
O_CK_SKEW_NS
The average delay difference between the DQS signals and
the CK signal, calculated by averaging the longest and
smallest DQS trace delay minus the CK trace delay. Positive
values represent DQS signals that are longer than CK
signals and negative values represent DQS signals that are
shorter than CK signals.
Package deskewed with board
layout (address/command
bus)
BOARD_DDR4_IS_SK
EW_WITHIN_AC_DES
KEWED
Enable this parameter if you are compensating for package
skew on the address, command, control, and memory clock
buses in the board layout. Include package skew in
calculating the following board skew parameters.
Package deskewed with board
layout (DQS group)
BOARD_DDR4_IS_SK
EW_WITHIN_DQS_DE
SKEWED
Enable this parameter if you are compensating for package
skew on the DQ, DQS, and DM buses in the board layout.
Include package skew in calculating the following board
skew parameters.
Maximum CK delay to DIMM/
device
BOARD_DDR4_MAX_C
K_DELAY_NS
The delay of the longest CK trace from the FPGA to any
DIMM/device.
Maximum DQS delay to DIMM/
device
BOARD_DDR4_MAX_D
QS_DELAY_NS
The delay of the longest DQS trace from the FPGA to any
DIMM/device
Maximum delay difference
between DIMMs/devices
BOARD_DDR4_SKEW_
BETWEEN_DIMMS_NS
The largest propagation delay on DQ signals between ranks
(applicable only when there is more than one rank). For
example: when you configure two ranks using one DIMM
there is a short distance between the ranks for the same DQ
pin; when you implement two ranks using two DIMMs the
distance is larger.
Maximum skew between DQS
groups
BOARD_DDR4_SKEW_
BETWEEN_DQS_NS
The largest skew between DQS signals.
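The two "Average delay difference" parameters in the table above are derived from extracted trace delays as their entries describe: the average of the longest and shortest signal trace delay, minus the CK trace delay, with positive values meaning the signal group is longer than CK. A small sketch of that arithmetic follows; all trace delays below are placeholders and should come from your own board layout.

ac_trace_delays_ns = [0.62, 0.64, 0.66, 0.70]   # address/command traces (assumed)
ck_trace_delays_ns = [0.60, 0.61]               # CK traces (assumed)
dqs_trace_delays_ns = [0.58, 0.59, 0.63, 0.65]  # DQS traces (assumed)

# Average of longest and shortest address/command delay, minus the maximum CK delay.
ac_to_ck_skew_ns = (max(ac_trace_delays_ns) + min(ac_trace_delays_ns)) / 2 - max(ck_trace_delays_ns)

# Average of longest and shortest DQS delay, minus the CK trace delay (the maximum CK delay is used here).
dqs_to_ck_skew_ns = (max(dqs_trace_delays_ns) + min(dqs_trace_delays_ns)) / 2 - max(ck_trace_delays_ns)

print("Average address/command to CK delay difference: {:+.3f} ns".format(ac_to_ck_skew_ns))
print("Average DQS to CK delay difference: {:+.3f} ns".format(dqs_to_ck_skew_ns))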
7.4.3.7 Arria 10 EMIF IP DDR4 Parameters: Controller
Table 137.
Group: Controller / Low Power Mode
Display Name
Identifier
Description
Auto Power-Down Cycles
CTRL_DDR4_AUTO_P
OWER_DOWN_CYCS
Specifies the number of idle controller cycles after which the
memory device is placed into power-down mode. You can
configure the idle waiting time. The supported range for
number of cycles is from 1 to 65534.
Enable Auto Power-Down
CTRL_DDR4_AUTO_P
OWER_DOWN_EN
Enable this parameter to have the controller automatically
place the memory device into power-down mode after a
specified number of idle controller clock cycles. The idle wait
time is configurable. All ranks must be idle to enter auto
power-down.
Table 138.
Group: Controller / Efficiency
Display Name
Identifier
Description
Address Ordering
CTRL_DDR4_ADDR_O
RDER_ENUM
Controls the mapping between Avalon-MM addresses and memory device addresses. By changing the value of this parameter, you change how the Avalon-MM address is decomposed into the DRAM address fields (CS = chip select, CID = chip ID in 3DS/TSV devices, BG = bank group address, Bank = bank address, Row = row address, Col = column address). An illustrative decomposition sketch follows this table.
Enable Auto-Precharge Control
CTRL_DDR4_AUTO_PR
ECHARGE_EN
Select this parameter to enable the auto-precharge control
on the controller top level. If you assert the auto-precharge
control signal while requesting a read or write burst, you
can specify whether the controller should close (auto-precharge) the currently open page at the end of the read
or write burst, potentially making a future access to a
different page of the same bank faster.
Enable Reordering
CTRL_DDR4_REORDE
R_EN
Enable this parameter to allow the controller to perform
command and data reordering. Reordering can improve
efficiency by reducing bus turnaround time and row/bank
switching time. Data reordering allows the single-port
memory controller to change the order of read and write
commands to achieve highest efficiency. Command
reordering allows the controller to issue bank management
commands early based on incoming patterns, so that the
desired row in memory is already open when the command
reaches the memory interface. For more information, refer
to the Data Reordering topic in the EMIF Handbook.
Starvation limit for each
command
CTRL_DDR4_STARVE_
LIMIT
Specifies the number of commands that can be served
before a waiting command is served. The controller employs
a counter to ensure that all requests are served after a predefined interval, so that low-priority requests are not ignored when data is reordered for efficiency. The
valid range for this parameter is from 1 to 63. For more
information, refer to the Starvation Control topic in the EMIF
Handbook.
Enable Command Priority
Control
CTRL_DDR4_USER_PR
IORITY_EN
Select this parameter to enable user-requested command
priority control on the controller top level. This parameter
instructs the controller to treat a read or write request as
high-priority. The controller attempts to fill high-priority
requests sooner, to reduce latency. Connect this interface to
the conduit of your logic block that determines when the
external memory interface IP treats the read or write
request as a high-priority command.
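To make the Address Ordering parameter more concrete, the sketch below decomposes a flat word address into column, bank group, bank, and row fields for one possible ordering. The field order and widths shown are illustrative assumptions only; the orderings actually available are those offered by the parameter, and the real field widths come from your topology settings (row, column, and bank address widths).

# Hypothetical field widths, for illustration only.
ROW_BITS, BANK_BITS, BG_BITS, COL_BITS = 16, 2, 2, 10

def decompose(addr):
    # One possible "Row-Bank-BankGroup-Column" packing, lowest-order bits first.
    col = addr & ((1 << COL_BITS) - 1)
    addr >>= COL_BITS
    bg = addr & ((1 << BG_BITS) - 1)
    addr >>= BG_BITS
    bank = addr & ((1 << BANK_BITS) - 1)
    addr >>= BANK_BITS
    row = addr & ((1 << ROW_BITS) - 1)
    return {"row": row, "bank": bank, "bank_group": bg, "col": col}

print(decompose(0x01234567))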
Table 139.
Group: Controller / Configuration, Status, and Error Handling
Display Name
Identifier
Description
Enable Auto Error Correction
CTRL_DDR4_ECC_AUT
O_CORRECTION_EN
Specifies that the controller perform auto correction when a
single-bit error is detected by the ECC logic.
Enable Error Detection and
Correction Logic with ECC
CTRL_DDR4_ECC_EN
Enables error-correction code (ECC) for single-bit error
correction and double-bit error detection. Your memory
interface must have a width of 16, 24, 40, or 72 bits to use
ECC. ECC is implemented as soft logic.
Enable Memory-Mapped
Configuration and Status
Register (MMR) Interface
CTRL_DDR4_MMR_EN
Enable this parameter to change or read memory timing
parameters, memory address size, mode register settings,
controller status, and request sideband operations.
Table 140.
Group: Controller / Data Bus Turnaround Time
Display Name
Identifier
Description
Additional read-to-read
turnaround time (different
ranks)
CTRL_DDR4_RD_TO_
RD_DIFF_CHIP_DELTA
_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read of one
logical rank to a read of another logical rank. This can
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional read-to-write
turnaround time (different
ranks)
CTRL_DDR4_RD_TO_
WR_DIFF_CHIP_DELT
A_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read of one
logical rank to a write of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional read-to-write
turnaround time (same rank)
CTRL_DDR4_RD_TO_
WR_SAME_CHIP_DELT
A_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read to a write
within the same logical rank. This can help resolve bus
contention problems specific to your board topology. The
value is added to the default which is calculated
automatically. Use the default setting unless you suspect a
problem exists.
Additional write-to-read
turnaround time (different
ranks)
CTRL_DDR4_WR_TO_
RD_DIFF_CHIP_DELTA
_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write of one
logical rank to a read of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional write-to-read
turnaround time (same rank)
CTRL_DDR4_WR_TO_
RD_SAME_CHIP_DELT
A_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write to a read
within the same logical rank. This can help resolve bus
contention problems specific to your board topology. The
value is added to the default which is calculated
automatically. Use the default setting unless you suspect a
problem exists.
Additional write-to-write
turnaround time (different
ranks)
CTRL_DDR4_WR_TO_
WR_DIFF_CHIP_DELT
A_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write of one
logical rank to a write of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
7.4.3.8 Arria 10 EMIF IP DDR4 Parameters: Diagnostics
Table 141.
Group: Diagnostics / Simulation Options
Display Name
Identifier
Description
Abstract phy for fast simulation
DIAG_DDR4_ABSTRA
CT_PHY
Specifies that the system use Abstract PHY for simulation.
Abstract PHY replaces the PHY with a model for fast
simulation and can reduce simulation time by 2-3 times.
Abstract PHY is available for certain protocols and device
families, and only when you select Skip Calibration.
Calibration mode
DIAG_SIM_CAL_MODE
_ENUM
Specifies whether to skip memory interface calibration
during simulation, or to simulate the full calibration process.
Simulating the full calibration process can take hours (or
even days), depending on the width and depth of the
memory interface. You can achieve much faster simulation
times by skipping the calibration process, but that is only
expected to work when the memory model is ideal and the
interconnect delays are zero. If you enable this parameter,
the interface still performs some memory initialization
before starting normal operations. Abstract PHY is
supported with skip calibration.
Table 142.
Group: Diagnostics / Calibration Debug Options
Display Name
Identifier
Description
Skip address/command
deskew calibration
DIAG_DDR4_SKIP_CA
_DESKEW
Specifies to skip the address/command deskew calibration
stage. Address/command deskew performs per-bit deskew
for the address and command pins.
Skip address/command
leveling calibration
DIAG_DDR4_SKIP_CA
_LEVEL
Specifies to skip the address/command leveling stage
during calibration. Address/command leveling attempts to
center the memory clock edge against CS# by adjusting
delay elements inside the PHY, and then applying the same
delay offset to the rest of the address and command pins.
Skip VREF calibration
DIAG_DDR4_SKIP_VR
EF_CAL
Specifies to skip the VREF stage of calibration. Enable this
parameter for debug purposes only; generally, you should
include the VREF calibration stage during normal operation.
Enable Daisy-Chaining for
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_A
VALON_MASTER
Specifies that the IP export an Avalon-MM master interface
(cal_debug_out) which can connect to the cal_debug
interface of other EMIF cores residing in the same I/O
column. This parameter applies only if the EMIF Debug
Toolkit or On-Chip Debug Port is enabled. Refer to the
Debugging Multiple EMIFs wiki page for more information
about debugging multiple EMIFs.
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_A
VALON_SLAVE
Specifies the connectivity of an Avalon slave interface for
use by the Quartus Prime EMIF Debug Toolkit or user core
logic. If you set this parameter to "Disabled," no debug
features are enabled. If you set this parameter to "Export,"
an Avalon slave interface named "cal_debug" is exported
from the IP. To use this interface with the EMIF Debug
Toolkit, you must instantiate and connect an EMIF debug
interface IP core to it, or connect it to the cal_debug_out
interface of another EMIF core. If you select "Add EMIF
Debug Interface", an EMIF debug interface component
containing a JTAG Avalon Master is connected to the debug
port, allowing the core to be accessed by the EMIF Debug
Toolkit. Only one EMIF debug interface should be
instantiated per I/O column. You can chain additional EMIF
or PHYLite cores to the first by enabling the "Enable Daisy-Chaining for Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option for all cores in the chain, and selecting
"Export" for the "Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option on all cores after the first.
Interface ID
DIAG_INTERFACE_ID
Identifies interfaces within the I/O column, for use by the
EMIF Debug Toolkit and the On-Chip Debug Port. Interface
IDs should be unique among EMIF cores within the same
I/O column. If the Quartus Prime EMIF Debug Toolkit/On-Chip Debug Port parameter is set to Disabled, the interface
ID is unused.
Use Soft NIOS Processor for
On-Chip Debug
DIAG_SOFT_NIOS_M
ODE
Enables a soft Nios processor as a peripheral component to
access the On-Chip Debug Port. Only one interface in a
column can activate this option.
Table 143.
Group: Diagnostics / Example Design
Display Name
Identifier
Description
Enable In-System-Sources-and-Probes
DIAG_EX_DESIGN_IS
SP_EN
Enables In-System-Sources-and-Probes in the example
design for common debug signals, such as calibration status
or example traffic generator per-bit status. This parameter
must be enabled if you want to do driver margining.
Number of core clocks sharing
slaves to instantiate in the
example design
DIAG_EX_DESIGN_NU
M_OF_SLAVES
Specifies the number of core clock sharing slaves to
instantiate in the example design. This parameter applies
only if you set the "Core clocks sharing" parameter in the
"General" tab to Master or Slave.
Table 144.
Group: Diagnostics / Traffic Generator
Display Name
Identifier
Description
Bypass the default traffic
pattern
DIAG_BYPASS_DEFAU
LT_PATTERN
Specifies that the controller/interface bypass the traffic
generator 2.0 default pattern after reset. If you do not
enable this parameter, the traffic generator does not assert
a pass or fail status until the generator is configured and
signaled to start by its Avalon configuration interface.
Bypass the traffic generator
repeated-writes/repeated-reads test pattern
DIAG_BYPASS_REPEA
T_STAGE
Specifies that the controller/interface bypass the traffic
generator's repeat test stage. If you do not enable this
parameter, every write and read is repeated several times.
Bypass the traffic generator
stress pattern
DIAG_BYPASS_STRES
S_STAGE
Specifies that the controller/interface bypass the traffic
generator's stress pattern stage. (Stress patterns are meant
to create worst-case signal integrity patterns on the data
pins.) If you do not enable this parameter, the traffic
generator does not assert a pass or fail status until the
generator is configured and signaled to start by its Avalon
configuration interface.
Bypass the user-configured
traffic stage
DIAG_BYPASS_USER_
STAGE
Specifies that the controller/interface bypass the user-configured traffic generator's pattern after reset. If you do
not enable this parameter, the traffic generator does not
assert a pass or fail status until the generator is configured
and signaled to start by its Avalon configuration interface.
Configuration can be done by connecting to the traffic
generator via the EMIF Debug Toolkit, or by using custom
logic connected to the Avalon-MM configuration slave port
on the traffic generator. Configuration can also be simulated
using the example testbench provided in the
altera_emif_avl_tg_2_tb.sv file.
Run diagnostic on infinite test
duration
DIAG_INFI_TG2_ERR_
TEST
Specifies that the traffic generator run indefinitely until the
first error is detected.
Export Traffic Generator 2.0
configuration interface
DIAG_TG_AVL_2_EXP
ORT_CFG_INTERFACE
Specifies that the IP export an Avalon-MM slave port for
configuring the Traffic Generator. This is required only if you
are configuring the traffic generator through user logic and
not through the EMIF Debug Toolkit.
Use configurable Avalon traffic
generator 2.0
DIAG_USE_TG_AVL_2
This option allows users to add the new configurable Avalon
traffic generator to the example design.
Table 145.
Group: Diagnostics / Performance
Display Name
Identifier
Description
Enable Efficiency Monitor
DIAG_EFFICIENCY_MONITOR
Adds an Efficiency Monitor component to the Avalon-MM interface of the memory controller, allowing you to view efficiency statistics of the interface. You can access the efficiency statistics using the EMIF Debug Toolkit.
Table 146.
Group: Diagnostics / Miscellaneous
Display Name
Identifier
Description
Use short Qsys interface names
SHORT_QSYS_INTERFACE_NAMES
Specifies the use of short interface names, for improved usability and consistency with other Qsys components. If this parameter is disabled, the names of Qsys interfaces exposed by the IP will include the type and direction of the interface. Long interface names are supported for backward compatibility and will be removed in a future release.
7.4.3.9 Arria 10 EMIF IP DDR4 Parameters: Example Designs
Table 147.
Group: Example Designs / Available Example Designs
Display Name
Identifier
Description
Select design
EX_DESIGN_GUI_DDR4_SEL_DESIGN
Specifies the creation of a full Quartus Prime project,
instantiating an external memory interface and an example
traffic generator, according to your parameterization. After
the design is created, you can specify the target device and
pin location assignments, run a full compilation, verify
timing closure, and test the interface on your board using
the programming file created by the Quartus Prime
assembler. The 'Generate Example Design' button lets you
generate simulation or synthesis file sets.
Table 148.
Group: Example Designs / Example Design Files
Display Name
Identifier
Description
Simulation
EX_DESIGN_GUI_DDR
4_GEN_SIM
Specifies that the 'Generate Example Design' button create
all necessary file sets for simulation. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, simulation file sets are not created.
Instead, the output directory will contain the ed_sim.qsys
file which holds Qsys details of the simulation example
design, and a make_sim_design.tcl file with other
corresponding tcl files. You can run make_sim_design.tcl
from a command line to generate the simulation example
design. The generated example designs for various
simulators are stored in the /sim sub-directory.
Synthesis
EX_DESIGN_GUI_DDR
4_GEN_SYNTH
Specifies that the 'Generate Example Design' button create
all necessary file sets for synthesis. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, synthesis file sets are not created.
Instead, the output directory will contain the ed_synth.qsys
file which holds Qsys details of the synthesis example
design, and a make_qii_design.tcl script with other
corresponding tcl files. You can run make_qii_design.tcl
from a command line to generate the synthesis example
design. The generated example design is stored in the /qii
sub-directory.
Table 149.
Group: Example Designs / Generated HDL Format
Display Name
Identifier
Description
Simulation HDL format
EX_DESIGN_GUI_DDR4_HDL_FORMAT
This option lets you choose the format of HDL in which generated simulation files are created.
Table 150.
Group: Example Designs / Target Development Kit
Display Name
Identifier
Description
Select board
EX_DESIGN_GUI_DDR4_TARGET_DEV_KIT
Specifies that when you select a development kit with a
memory module, the generated example design contains all
settings and fixed pin assignments to run on the selected
board. You must select a development kit preset to
generate a working example design for the specified
development kit. Any IP settings not applied directly from a
development kit preset will not have guaranteed results
when testing the development kit. To exclude hardware
support of the example design, select 'none' from the
'Select board' pull down menu. When you apply a
development kit preset, all IP parameters are automatically
set appropriately to match the selected preset. If you want
to save your current settings, you should do so before you
apply the preset. You can save your settings under a
different name using File->Save as.
7.4.3.10 About Memory Presets
Presets help simplify the process of copying memory parameter values from memory
device data sheets to the EMIF parameter editor.
For DDRx protocols, the memory presets are named using the following convention:
PROTOCOL-SPEEDBIN LATENCY FORMAT-AND-TOPOLOGY CAPACITY (INTERNAL-ORGANIZATION)
For example, the preset named DDR4-2666U CL18 Component 1CS 2Gb (512Mb
x 4) refers to a DDR4 x4 component rated at the DDR4-2666U JEDEC speed bin, with
nominal CAS latency of 18 cycles, one chip-select, and a total memory space of 2Gb.
The JEDEC memory specification defines multiple speed bins for a given frequency
(that is, DDR4-2666U and DDR4-2666V). You may be able to determine the exact
speed bin implemented by your memory device using its nominal latency. When in
doubt, contact your memory vendor.
For RLDRAMx and QDRx protocols, the memory presets are named based on the
vendor's device part number.
When the preset list does not contain the exact configuration required, you can still
minimize data entry by selecting the preset closest to your configuration and then
modifying parameters as required.
Prior to production you should always review the parameter values to ensure that they
match your memory device data sheet, regardless of whether a preset is used or not.
Incorrect memory parameters can cause functional failures.
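The DDRx preset naming convention above can be read mechanically. The following sketch splits the example preset name from this section into its fields; the regular expression is an assumption about the delimiters, based on that single example.

import re

PRESET = "DDR4-2666U CL18 Component 1CS 2Gb (512Mb x 4)"
m = re.match(
    r"(?P<protocol_speedbin>\S+)\s+"
    r"(?P<latency>CL\d+)\s+"
    r"(?P<format_and_topology>.+?)\s+"
    r"(?P<capacity>\S+)\s+"
    r"\((?P<internal_organization>[^)]+)\)$",
    PRESET,
)
print(m.groupdict() if m else "no match")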
7.4.3.11 x4 Mode for Arria 10 External Memory Interface
Non-HPS Arria 10 external memory interfaces support DQ pins-per-DQS group-of-4
(x4 mode) for DDR3 and DDR4 memory protocols.
The following restrictions apply to the use of x4 mode:
• The total interface width is limited to 72 bits.
• You must disable the Enable DM pins option.
• For DDR4, you must disable the DBI option.
Note:
x4 mode is not available for Arria 10 EMIF IP for HPS.
7.4.3.12 Additional Notes About Parameterizing Arria 10 EMIF IP for HPS
Although Arria 10 EMIF IP and Arria 10 EMIF IP for HPS are similar components, there
are some additional requirements necessary in the HPS case.
The following rules and restrictions apply to Arria 10 EMIF IP for HPS:
• Supported memory protocols are limited to DDR3 and DDR4.
• The only supported configuration is the hard PHY with the hard memory controller.
• The maximum memory clock frequency for Arria 10 EMIF IP for HPS may be different than for regular Arria 10 EMIF IP. Refer to the External Memory Interface Spec Estimator for details.
• Only half-rate interfaces are supported.
• Sharing of clocks is not supported.
• The total interface width is limited to a multiple of 16, 24, 40 or 72 bits (with ECC enabled), or a positive value divisible by the number of DQ pins per DQS group (with ECC not enabled). For devices other than 10ASXXXKX40, the total interface width is further limited to a maximum of 40 bits with ECC enabled and 32 bits with ECC not enabled.
• Only x8 data groups are supported; that is, DQ pins-per-DQS group must be 8.
• DM pins must be enabled.
• The EMIF debug toolkit is not supported.
• Ping Pong PHY is not supported.
• The interface to and from the HPS is a fixed-width conduit.
• A maximum of 3 address/command I/O lanes are supported. For example:
  — DDR3
    • For component format, maximum number of chip selects is 2.
    • For UDIMM or SODIMM format:
      — Maximum number of DIMMs is 2, when the number of physical ranks per DIMM is 1.
      — Maximum number of DIMMs is 1, when the number of physical ranks per DIMM is 2.
      — Maximum number of physical ranks per DIMM is 2, when the number of DIMMs is 1.
    • For RDIMM format:
      — Maximum number of clocks is 1.
      — Maximum number of DIMMs is 1.
      — Maximum number of physical ranks per DIMM is 2.
    • LRDIMM memory format is not supported.
  — DDR4
    • For component format:
      — Maximum number of clocks is 1.
      — Maximum number of chip selects is 2.
    • For UDIMM or RDIMM format:
      — Maximum number of clocks is 1.
      — Maximum number of DIMMs is 2, when the number of physical ranks per DIMM is 1.
      — Maximum number of DIMMs is 1, when the number of physical ranks per DIMM is 2.
      — Maximum number of physical ranks per DIMM is 2, when the number of DIMMs is 1.
    • For SODIMM format:
      — Maximum number of clocks is 1.
      — Maximum number of DIMMs is 1.
      — Maximum number of physical ranks per DIMM is 1.
Arria 10 EMIF IP for HPS also has specific pin-out requirements. For information, refer to Planning Pin and FPGA Resources.
7.4.4 Arria 10 EMIF IP DDR3 Parameters
The Arria 10 EMIF IP parameter editor allows you to parameterize settings for the
Arria 10 EMIF IP.
The text window at the bottom of the parameter editor displays information about the
memory interface, as well as warning and error messages. You should correct any
errors indicated in this window before clicking the Finish button.
Note:
Default settings are the minimum required to achieve timing, and may vary depending
on memory protocol.
The following tables describe the parameterization settings available in the parameter
editor for the Arria 10 EMIF IP.
7.4.4.1 Arria 10 EMIF IP DDR3 Parameters: General
Table 151.
Group: General / FPGA
Display Name
Identifier
Description
Speed grade
PHY_FPGA_SPEEDGRADE_GUI
Indicates the device speed grade, and whether it is an engineering sample (ES) or production device. This value is based on the device that you select in the parameter editor. If you do not specify a device, the system assumes a default value. Ensure that you always specify the correct device during IP generation, otherwise your IP may not work in hardware.
Table 152.
Group: General / Interface
Display Name
Identifier
Description
Configuration
PHY_CONFIG_ENUM
Specifies the configuration of the memory interface. The
available options depend on the protocol in use. Options
include Hard PHY and Hard Controller, Hard PHY and Soft
Controller, or Hard PHY only. If you select Hard PHY only,
the AFI interface is exported to allow connection of a
custom memory controller or third-party IP.
Instantiate two controllers
sharing a Ping Pong PHY
PHY_PING_PONG_EN
Specifies the instantiation of two identical memory
controllers that share an address/command bus through the
use of Ping Pong PHY. This parameter is available only if you
specify the Hard PHY and Hard Controller option. When this
parameter is enabled, the IP exposes two independent
Avalon interfaces to the user logic, and a single external
memory interface with double width for the data bus and
the CS#, CKE, ODT, and CK/CK# signals.
Table 153.
Group: General / Clocks
Display Name
Identifier
Description
Core clocks sharing
PHY_CORE_CLKS_SHARING_ENUM
When a design contains multiple interfaces of the same
protocol, rate, frequency, and PLL reference clock source,
they can share a common set of core clock domains. By
sharing core clock domains, they reduce clock network
usage and avoid clock synchronization logic between the
interfaces. To share core clocks, denote one of the
interfaces as "Master", and the remaining interfaces as
"Slave". In the RTL, connect the clks_sharing_master_out
signal from the master interface to the
clks_sharing_slave_in signal of all the slave interfaces. Both
master and slave interfaces still expose their own output
clock ports in the RTL (for example, emif_usr_clk, afi_clk),
but the physical signals are equivalent, hence it does not
matter whether a clock port from a master or a slave is
used. As the combined width of all interfaces sharing the
same core clock increases, you may encounter timing
closure difficulty for transfers between the FPGA core and
the periphery.
Use recommended PLL
reference clock frequency
PHY_DDR3_DEFAULT_
REF_CLK_FREQ
Specifies that the PLL reference clock frequency is
automatically calculated for best performance. If you want
to specify a different PLL reference clock frequency, uncheck
the check box for this parameter.
Memory clock frequency
PHY_MEM_CLK_FREQ_
MHZ
Specifies the operating frequency of the memory interface
in MHz. If you change the memory frequency, you should
update the memory latency parameters on the "Memory"
tab and the memory timing parameters on the "Mem
Timing" tab.
Clock rate of user logic
PHY_RATE_ENUM
Specifies the relationship between the user logic clock
frequency and the memory clock frequency. For example, if
the memory clock sent from the FPGA to the memory
device is toggling at 800MHz, a quarter-rate interface
means that the user logic in the FPGA runs at 200MHz.
PLL reference clock frequency
PHY_REF_CLK_FREQ_
MHZ
Specifies the PLL reference clock frequency. You must
configure this parameter only if you do not check the "Use
recommended PLL reference clock frequency" parameter. To
configure this parameter, select a valid PLL reference clock
frequency from the list. The values in the list can change if
you change the memory interface frequency and/or the
clock rate of the user logic. For best jitter performance, you
should use the fastest possible PLL reference clock
frequency.
PLL reference clock jitter
PHY_REF_CLK_JITTER
_PS
Specifies the peak-to-peak jitter on the PLL reference clock
source. The clock source of the PLL reference clock must
meet or exceed the following jitter requirements: 10ps peak
to peak, or 1.42ps RMS at 1e-12 BER, 1.22ps at 1e-16 BER.
Specify additional core clocks
based on existing PLL
PLL_ADD_EXTRA_CLK
S
Displays additional parameters allowing you to create
additional output clocks based on the existing PLL. This
parameter provides an alternative clock-generation
mechanism for when your design exhausts available PLL
resources. The additional output clocks that you create can
be fed into the core. Clock signals created with this
parameter are synchronous to each other, but asynchronous
to the memory interface core clock domains (such as
emif_usr_clk or afi_clk). You must follow proper clock-domain-crossing techniques when transferring data between
clock domains.
Table 154.
Group: General / Additional Core Clocks
Display Name
Identifier
Description
Number of additional core clocks
PLL_USER_NUM_OF_EXTRA_CLKS
Specifies the number of additional output clocks to create from the PLL.
7.4.4.2 Arria 10 EMIF IP DDR3 Parameters: Memory
Table 155.
Group: Memory / Topology
Display Name
Identifier
Description
DQS group of ALERT#
MEM_DDR3_ALERT_N
_DQS_GROUP
Select the DQS group with which the ALERT# pin is placed.
ALERT# pin placement
MEM_DDR3_ALERT_N
_PLACEMENT_ENUM
Specifies placement for the mem_alert_n signal. If you
select "I/O Lane with Address/Command Pins", you can pick
the I/O lane and pin index in the add/cmd bank with the
subsequent drop down menus. If you select "I/O Lane with
DQS Group", you can specify the DQS group with which to
place the mem_alert_n pin. If you select "Automatically
select a location", the IP automatically selects a pin for the
mem_alert_n signal. If you select this option, no additional
location constraints can be applied to the mem_alert_n pin,
or a fitter error will result during compilation. For optimum
signal integrity, you should choose "I/O Lane with Address/
Command Pins". For interfaces containing multiple memory
devices, it is recommended to connect the ALERT# pins
together to the ALERT# pin on the FPGA.
Bank address width
MEM_DDR3_BANK_AD
DR_WIDTH
Specifies the number of bank address pins. Refer to the
data sheet for your memory device. The density of the
selected memory device determines the number of bank
address pins needed for access to all available banks.
Number of clocks
MEM_DDR3_CK_WIDT
H
Specifies the number of CK/CK# clock pairs exposed by the
memory interface. Usually more than 1 pair is required for
RDIMM/LRDIMM formats. The value of this parameter
depends on the memory device selected; refer to the data
sheet for your memory device.
Column address width
MEM_DDR3_COL_ADD
R_WIDTH
Specifies the number of column address pins. Refer to the
data sheet for your memory device. The density of the
selected memory device determines the number of address
pins needed for access to all available columns.
Number of chip selects per
DIMM
MEM_DDR3_CS_PER_
DIMM
Specifies the number of chip selects per DIMM.
Number of chip selects
MEM_DDR3_DISCRET
E_CS_WIDTH
Specifies the total number of chip selects in the interface,
up to a maximum of 4. This parameter applies to discrete components only.
Enable DM pins
MEM_DDR3_DM_EN
Indicates whether the interface uses data mask (DM) pins.
This feature allows specified portions of the data bus to be
written to memory (not available in x4 mode). One DM pin
exists per DQS group
Number of DQS groups
MEM_DDR3_DQS_WI
DTH
Specifies the total number of DQS groups in the interface.
This value is automatically calculated as the DQ width
divided by the number of DQ pins per DQS group.
DQ pins per DQS group
MEM_DDR3_DQ_PER_
DQS
Specifies the total number of DQ pins per DQS group.
DQ width
MEM_DDR3_DQ_WIDT
H
Specifies the total number of data pins in the interface. The
maximum supported width is 144, or 72 in Ping Pong PHY
mode.
Memory format
MEM_DDR3_FORMAT_
ENUM
Specifies the format of the external memory device. The
following formats are supported: Component - a Discrete
memory device; UDIMM - Unregistered/Unbuffered DIMM
where address/control, clock, and data are unbuffered;
RDIMM - Registered DIMM where address/control and clock
are buffered; LRDIMM - Load Reduction DIMM where
address/control, clock, and data are buffered. LRDIMM
reduces the load to increase memory speed and supports
higher densities than RDIMM; SODIMM - Small Outline
DIMM is similar to UDIMM but smaller in size and is typically
used for systems with limited space. Some memory
protocols may not be available in all formats.
Number of DIMMs
MEM_DDR3_NUM_OF_
DIMMS
Total number of DIMMs.
Number of physical ranks per
DIMM
MEM_DDR3_RANKS_P
ER_DIMM
Number of ranks per DIMM. For LRDIMM, this represents
the number of physical ranks on the DIMM behind the
memory buffer
Number of rank multiplication
pins
MEM_DDR3_RM_WIDT
H
Number of rank multiplication pins used to access all
physical ranks on an LRDIMM. Rank multiplication is a ratio
between the number of physical ranks for an LRDIMM and
the number of logical ranks for the controller. These pins
should be connected to CS#[2] and/or CS#[3] of all
LRDIMMs in the system
Row address width
MEM_DDR3_ROW_AD
DR_WIDTH
Specifies the number of row address pins. Refer to the data
sheet for your memory device. The density of the selected
memory device determines the number of address pins
needed for access to all available rows.
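Several of the topology values above are related by simple arithmetic: the number of DQS groups is the DQ width divided by the DQ pins per DQS group, and one DM pin exists per DQS group when DM is enabled. A small sketch, using a 72-bit bus with x8 groups as placeholder values:

dq_width = 72           # total number of DQ pins in the interface (assumed)
dq_per_dqs_group = 8    # DQ pins per DQS group, i.e. x8 data groups (assumed)

assert dq_width % dq_per_dqs_group == 0, "DQ width must be a multiple of the group size"
num_dqs_groups = dq_width // dq_per_dqs_group
print(num_dqs_groups)   # 9 DQS groups, and 9 DM pins if DM is enabled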
Table 156.
Group: Memory / Latency and Burst
Display Name
Identifier
Description
Memory additive CAS latency
setting
MEM_DDR3_ATCL_EN
UM
Determines the posted CAS additive latency of the memory
device. Enable this feature to improve command and bus
efficiency, and increase system bandwidth.
Burst Length
MEM_DDR3_BL_ENUM
Specifies the DRAM burst length which determines how
many consecutive addresses should be accessed for a given
read/write command.
Read Burst Type
MEM_DDR3_BT_ENUM
Indicates whether accesses within a given burst are in
sequential or interleaved order. Select sequential if you are
using the Intel-provided memory controller.
Memory CAS latency setting
MEM_DDR3_TCL
Specifies the number of clock cycles between the read
command and the availability of the first bit of output data
at the memory device. Overall read latency equals the
additive latency (AL) + the CAS latency (CL). Overall read
latency depends on the memory device selected; refer to
the datasheet for your device.
Memory write CAS latency
setting
MEM_DDR3_WTCL
Specifies the number of clock cycles from the release of
internal write to the latching of the first data in at the
memory device. This value depends on the memory device
selected; refer to the datasheet for your device.
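As the "Memory CAS latency setting" entry notes, overall read latency is the additive latency plus the CAS latency. A trivial worked example with placeholder settings:

cl = 11        # Memory CAS latency setting, in memory clock cycles (assumed)
al = cl - 1    # additive latency, e.g. configured as CL-1 (assumed)

read_latency = al + cl
print("Overall read latency: {} memory clock cycles".format(read_latency))  # 21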
Table 157.
Group: Memory / Mode Register Settings
Display Name
Identifier
Description
Auto self-refresh method
MEM_DDR3_ASR_ENU
M
Indicates whether to enable or disable auto self-refresh.
Auto self-refresh allows the controller to issue self-refresh
requests, rather than manually issuing self-refresh in order
for memory to retain data.
DDR3 LRDIMM additional
control words
MEM_DDR3_LRDIMM_
EXTENDED_CONFIG
Each 4-bit setting can be obtained from the manufacturer's
data sheet and should be entered in hexadecimal, starting
with BC0F on the left and ending with BC00 on the right
DLL precharge power down
MEM_DDR3_PD_ENUM
Specifies whether the DLL in the memory device is off or on
during precharge power-down
DDR3 RDIMM/LRDIMM control
words
MEM_DDR3_RDIMM_C
ONFIG
Each 4-bit/8-bit setting can be obtained from the
manufacturer's data sheet and should be entered in
hexadecimal, starting with the 8-bit setting RCBx on the left
and continuing to RC1x, followed by the 4-bit setting RC0F and ending with RC00 on the right.
Self-refresh temperature
MEM_DDR3_SRT_ENU
M
Specifies the self-refresh temperature as "Normal" or
"Extended" mode. More information on Normal and
Extended temperature modes can be found in the memory
device datasheet.
7.4.4.3 Arria 10 EMIF IP DDR3 Parameters: Mem I/O
Table 158.
Group: Mem I/O / Memory I/O Settings
Display Name
Identifier
Description
Output drive strength setting
MEM_DDR3_DRV_STR
_ENUM
Specifies the output driver impedance setting at the
memory device. To obtain optimum signal integrity
performance, select option based on board simulation
results.
ODT Rtt nominal value
MEM_DDR3_RTT_NOM
_ENUM
Determines the nominal on-die termination value applied to
the DRAM. The termination is applied any time that ODT is
asserted. If you specify a different value for RTT_WR, that
value takes precedence over the values mentioned here. For
optimum signal integrity performance, select your option
based on board simulation results.
Dynamic ODT (Rtt_WR) value
MEM_DDR3_RTT_WR_
ENUM
Specifies the mode of the dynamic on-die termination (ODT)
during writes to the memory device (used for multi-rank
configurations). For optimum signal integrity performance,
select this option based on board simulation results.
Table 159.
Group: Mem I/O / ODT Activation
Display Name
Identifier
Description
Use Default ODT Assertion Tables
MEM_DDR3_USE_DEFAULT_ODT
Enables the default ODT assertion pattern as determined
from vendor guidelines. These settings are provided as a
default only; you should simulate your memory interface to
determine the optimal ODT settings and assertion patterns.
7.4.4.4 Arria 10 EMIF IP DDR3 Parameters: FPGA I/O
You should use Hyperlynx* or similar simulators to determine the best settings for
your board. Refer to the EMIF Simulation Guidance wiki page for additional
information.
Table 160.
Group: FPGA IO / FPGA IO Settings
Display Name
Identifier
Description
Use default I/O settings
PHY_DDR3_DEFAULT_IO
Specifies that a legal set of I/O settings is automatically
selected. The default I/O settings are not necessarily
optimized for a specific board. To achieve optimal signal
integrity, perform I/O simulations with IBIS models and
enter the I/O settings manually, based on simulation
results.
Voltage
PHY_DDR3_IO_VOLTA
GE
The voltage level for the I/O pins driving the signals
between the memory device and the FPGA memory
interface.
Periodic OCT re-calibration
PHY_USER_PERIODIC
_OCT_RECAL_ENUM
Specifies that the system periodically recalibrate on-chip
termination (OCT) to minimize variations in termination
value caused by changing operating conditions (such as
changes in temperature). By recalibrating OCT, I/O timing
margins are improved. When enabled, this parameter
causes the PHY to halt user traffic about every 0.5 seconds
for about 1900 memory clock cycles, to perform OCT
recalibration. Efficiency is reduced by about 1% when this
option is enabled.
Table 161.
Group: FPGA IO / Address/Command
Display Name
Identifier
Description
I/O standard
PHY_DDR3_USER_AC
_IO_STD_ENUM
Specifies the I/O electrical standard for the address/
command pins of the memory interface. The selected I/O
standard configures the circuit within the I/O buffer to
match the industry standard.
Output mode
PHY_DDR3_USER_AC
_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_DDR3_USER_AC
_SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 162.
Group: FPGA IO / Memory Clock
Display Name
Identifier
Description
I/O standard
PHY_DDR3_USER_CK
_IO_STD_ENUM
Specifies the I/O electrical standard for the memory clock
pins. The selected I/O standard configures the circuit within
the I/O buffer to match the industry standard.
Output mode
PHY_DDR3_USER_CK
_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_DDR3_USER_CK
_SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 163.
Group: FPGA IO / Data Bus
Display Name
Identifier
Description
Use recommended initial Vrefin
PHY_DDR3_USER_AU
TO_STARTING_VREFI
N_EN
Specifies that the initial Vrefin setting is calculated
automatically, to a reasonable value based on termination
settings.
Input mode
PHY_DDR3_USER_DA
TA_IN_MODE_ENUM
This parameter allows you to change the input termination
settings for the selected I/O standard. Perform board
simulation with IBIS models to determine the best settings
for your design.
I/O standard
PHY_DDR3_USER_DA
TA_IO_STD_ENUM
Specifies the I/O electrical standard for the data and data
clock/strobe pins of the memory interface. The selected I/O
standard option configures the circuit within the I/O buffer
to match the industry standard.
Output mode
PHY_DDR3_USER_DA
TA_OUT_MODE_ENUM
This parameter allows you to change the output current
drive strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Initial Vrefin
PHY_DDR3_USER_STA
RTING_VREFIN
Specifies the initial value for the reference voltage on the
data pins (Vrefin). This value is entered as a percentage of
the supply voltage level on the I/O pins. The specified value
serves as a starting point and may be overridden by
calibration to provide better timing margins. If you choose
to skip Vref calibration (Diagnostics tab), this is the value
that is used as the Vref for the interface.
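As a worked illustration only (the supply voltage and percentage below are assumed example values, not recommendations): with a 1.35 V I/O supply and an Initial Vrefin setting of 50%, the starting reference voltage is
V_{REF} = 0.50 \times 1.35\,\text{V} = 0.675\,\text{V}
Vref calibration can subsequently adjust this value to improve timing margins.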
Table 164.
Group: FPGA IO / PHY Inputs
Display Name
Identifier
Description
PLL reference clock I/O
standard
PHY_DDR3_USER_PLL
_REF_CLK_IO_STD_E
NUM
Specifies the I/O standard for the PLL reference clock of the
memory interface.
RZQ I/O standard
PHY_DDR3_USER_RZ
Q_IO_STD_ENUM
Specifies the I/O standard for the RZQ pin used in the
memory interface.
RZQ resistor
PHY_RZQ
Specifies the reference resistor used to calibrate the on-chip
termination value. You should connect the RZQ pin to GND
through an external resistor of the specified value.
7.4.4.5 Arria 10 EMIF IP DDR3 Parameters: Mem Timing
These parameters should be read from the table in the datasheet associated with the
speed bin of the memory device (not necessarily the frequency at which the interface
is running).
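Timing values specified in nanoseconds are ultimately enforced over whole memory clock cycles, so it can be useful to sanity-check them against your operating frequency. As an illustration with assumed example values (tRAS = 35 ns from a speed-bin table, memory clock of 1066.67 MHz, tCK = 0.9375 ns):
\text{cycles} = \left\lceil t_{RAS} / t_{CK} \right\rceil = \left\lceil 35\,\text{ns} / 0.9375\,\text{ns} \right\rceil = 38
Enter the nanosecond value exactly as given in the data sheet for the speed bin; the conversion above is only a sanity check of what the value implies at your chosen frequency.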
Table 165.
Group: Mem Timing / Parameters dependent on Speed Bin
Display Name
Identifier
Description
Speed bin
MEM_DDR3_SPEEDBI
N_ENUM
The speed grade of the memory device used. This
parameter refers to the maximum rate at which the
memory device is specified to run.
tDH (base) DC level
MEM_DDR3_TDH_DC_
MV
tDH (base) DC level refers to the voltage level which the
data bus must not cross during the hold window. The signal
is considered stable only if it remains above this voltage
level (for a logic 1) or below this voltage level (for a logic 0)
for the entire hold period.
tDH (base)
MEM_DDR3_TDH_PS
tDH (base) refers to the hold time for the Data (DQ) bus
after the rising edge of CK.
tDQSCK
MEM_DDR3_TDQSCK_
PS
tDQSCK describes the skew between the memory clock (CK)
and the input data strobes (DQS) used for reads. It is the
time between the rising data strobe edge (DQS, DQS#)
relative to the rising CK edge.
tDQSQ
MEM_DDR3_TDQSQ_P
S
tDQSQ describes the latest valid transition of the associated
DQ pins for a READ. tDQSQ specifically refers to the DQS,
DQS# to DQ skew. It is the length of time between the
DQS, DQS# crossing to the last valid transition of the
slowest DQ pin in the DQ group associated with that DQS
strobe.
tDQSS
MEM_DDR3_TDQSS_C
YC
tDQSS describes the skew between the memory clock (CK)
and the output data strobes used for writes. It is the time
between the rising data strobe edge (DQS, DQS#) relative
to the rising CK edge.
tDSH
MEM_DDR3_TDSH_CY
C
tDSH specifies the write DQS hold time. This is the time
difference between the rising CK edge and the falling edge
of DQS, measured as a percentage of tCK.
tDSS
MEM_DDR3_TDSS_CY
C
tDSS describes the time between the falling edge of DQS to
the rising edge of the next CK transition.
tDS (base) AC level
MEM_DDR3_TDS_AC_
MV
tDS (base) AC level refers to the voltage level which the
data bus must cross and remain above during the setup
margin window. The signal is considered stable only if it
remains above this voltage level (for a logic 1) or below this
voltage level (for a logic 0) for the entire setup period.
tDS (base)
MEM_DDR3_TDS_PS
tDS(base) refers to the setup time for the Data (DQ) bus
before the rising edge of the DQS strobe.
tIH (base) DC level
MEM_DDR3_TIH_DC_
MV
tIH (base) DC level refers to the voltage level which the
address/command signal must not cross during the hold
window. The signal is considered stable only if it remains
above this voltage level (for a logic 1) or below this voltage
level (for a logic 0) for the entire hold period.
tIH (base)
MEM_DDR3_TIH_PS
tIH (base) refers to the hold time for the Address/Command
(A) bus after the rising edge of CK. The hold margin can
vary depending on the AC level chosen for the design; this
variance is determined automatically when you choose the
"tIH (base) AC level".
tINIT
MEM_DDR3_TINIT_US
tINIT describes the time duration of the memory
initialization after a device power-up. After RESET_n is deasserted, wait for another 500us until CKE becomes active.
During this time, the DRAM starts internal initialization; this
happens independently of external clocks.
tIS (base) AC level
MEM_DDR3_TIS_AC_
MV
tIS (base) AC level refers to the voltage level which the
address/command signal must cross and remain above
during the setup margin window. The signal is considered
stable only if it remains above this voltage level (for a logic
1) or below this voltage level (for a logic 0) for the entire
setup period.
tIS (base)
MEM_DDR3_TIS_PS
tIS (base) refers to the setup time for the Address/
Command/Control (A) bus to the rising edge of CK.
tMRD
MEM_DDR3_TMRD_CK
_CYC
The mode register set command cycle time, tMRD is the
minimum time period required between two MRS
commands.
tQH
MEM_DDR3_TQH_CYC
tQH specifies the output hold time for the DQ in relation to
DQS, DQS#. It is the length of time between the DQS,
DQS# crossing to the earliest invalid transition of the
fastest DQ pin in the DQ group associated with that DQS
strobe.
tQSH
MEM_DDR3_TQSH_CY
C
tQSH refers to the differential High Pulse Width, which is
measured as a percentage of tCK. It is the time during
which the DQS is high for a read.
tRAS
MEM_DDR3_TRAS_NS
tRAS describes the activate to precharge duration. A row
cannot be deactivated until the tRAS time has been met.
Therefore tRAS determines how long the memory has to
wait after an activate command before a precharge command
can be issued to close the row.
tRCD
MEM_DDR3_TRCD_NS
tRCD, row command delay, describes the amount of delay
between the activation of a row through the RAS command
and the access to the data through the CAS command.
tRP
MEM_DDR3_TRP_NS
tRP refers to the Precharge (PRE) command period. It
describes how long it takes for the memory to disable
access to a row by precharging and before it is ready to
activate a different row.
tWLH
MEM_DDR3_TWLH_PS
tWLH describes the write leveling hold time from the rising
edge of DQS to the rising edge of CK.
tWLS
MEM_DDR3_TWLS_PS
tWLS describes the write leveling setup time. It is measured
from the rising edge of CK to the rising edge of DQS.
tWR
MEM_DDR3_TWR_NS
tWR refers to the Write Recovery time. It specifies the
number of clock cycles needed to complete a write before a
precharge command can be issued.
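Several of the parameters above (for example tDSH, tDSS, and tQSH) are expressed as a fraction (percentage) of tCK rather than in absolute time. As an illustration with assumed example values, a tDSH of 0.2 x tCK at an 800 MHz memory clock (tCK = 1.25 ns) corresponds to
t_{DSH} = 0.2 \times 1.25\,\text{ns} = 0.25\,\text{ns}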
Table 166.
Group: Mem Timing / Parameters dependent on Speed Bin, Operating
Frequency, and Page Size
Display Name
Identifier
Description
tFAW
MEM_DDR3_TFAW_NS
tFAW refers to the four activate window time. It describes
the period of time during which only four banks can be
active.
tRRD
MEM_DDR3_TRRD_CY
C
tRRD refers to the Row Active to Row Active Delay. It is the
minimum time interval (measured in memory clock cycles)
between two activate commands to rows in different banks
in the same rank
tRTP
MEM_DDR3_TRTP_CY
C
tRTP refers to the internal READ Command to PRECHARGE
Command delay. It is the number of memory clock cycles
that is needed between a read command and a precharge
command to the same rank.
tWTR
MEM_DDR3_TWTR_CY
C
tWTR or Write Timing Parameter describes the delay from
start of internal write transaction to internal read command,
for accesses to the same bank. The delay is measured from
the first rising memory clock edge after the last write data
is received to the rising memory clock edge when a read
command is received.
Table 167.
Group: Mem Timing / Parameters dependent on Density and Temperature
Display Name
Identifier
Description
tREFI
MEM_DDR3_TREFI_US
tREFI refers to the average periodic refresh interval. It is
the maximum amount of time the memory can tolerate in
between each refresh command
tRFC
MEM_DDR3_TRFC_NS
tRFC refers to the Refresh Cycle Time. It is the amount of
delay after a refresh command before an activate command
can be accepted by the memory. This parameter is
dependent on the memory density and is necessary for
proper hardware functionality.
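A quick way to gauge the bandwidth cost of refresh is the ratio of tRFC to tREFI. With assumed example values of tRFC = 350 ns and tREFI = 7.8 us:
t_{RFC} / t_{REFI} = 350\,\text{ns} / 7800\,\text{ns} \approx 4.5\%
so roughly 4.5% of interface time would be unavailable to user traffic while refreshes complete. Take both values from your device data sheet; tRFC in particular increases with memory density.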
7.4.4.6 Arria 10 EMIF IP DDR3 Parameters: Board
Table 168.
Group: Board / Intersymbol Interference/Crosstalk
Display Name
Identifier
Description
Address and command ISI/
crosstalk
BOARD_DDR3_USER_
AC_ISI_NS
The address and command window reduction due to ISI and
crosstalk effects. The number to be entered is the total loss
of margin on both the setup and hold sides (measured loss
on the setup side + measured loss on the hold side). Refer
to the EMIF Simulation Guidance wiki page for additional
information.
Read DQS/DQS# ISI/crosstalk
BOARD_DDR3_USER_
RCLK_ISI_NS
The reduction of the read data window due to ISI and
crosstalk effects on the DQS/DQS# signal when driven by
the memory device during a read. The number to be
entered is the total loss of margin on the setup and hold
sides (measured loss on the setup side + measured loss on
the hold side). Refer to the EMIF Simulation Guidance wiki
page for additional information.
Read DQ ISI/crosstalk
BOARD_DDR3_USER_
RDATA_ISI_NS
The reduction of the read data window due to ISI and
crosstalk effects on the DQ signal when driven by the
memory device during a read. The number to be entered is
the total loss of margin on the setup and hold side
(measured loss on the setup side + measured loss on the
hold side). Refer to the EMIF Simulation Guidance wiki page
for additional information.
Write DQS/DQS# ISI/crosstalk
BOARD_DDR3_USER_
WCLK_ISI_NS
The reduction of the write data window due to ISI and
crosstalk effects on the DQS/DQS# signal when driven by
the FPGA during a write. The number to be entered is the
total loss of margin on the setup and hold sides (measured
loss on the setup side + measured loss on the hold side).
Refer to the EMIF Simulation Guidance wiki page for
additional information.
Write DQ ISI/crosstalk
BOARD_DDR3_USER_
WDATA_ISI_NS
The reduction of the write data window due to ISI and
crosstalk effects on the DQ signal when driven by the FPGA
during a write. The number to be entered is the total loss of
margin on the setup and hold sides (measured loss on the
setup side + measured loss on the hold side). Refer to the
EMIF Simulation Guidance wiki page for additional
information.
Use default ISI/crosstalk
values
BOARD_DDR3_USE_D
EFAULT_ISI_VALUES
You can enable this option to use default intersymbol
interference and crosstalk values for your topology. Note
that the default values are not optimized for your board. For
optimal signal integrity, it is recommended that you do not
enable this parameter; instead, perform I/O simulation
using IBIS models and Hyperlynx*, and manually enter
values based on your simulation results.
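Each ISI/crosstalk entry is the sum of the margin lost on the setup side and on the hold side, as measured in your board simulation. For example, with assumed simulation results of 30 ps of setup-side loss and 20 ps of hold-side loss, you would enter
0.030\,\text{ns} + 0.020\,\text{ns} = 0.05\,\text{ns}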
Table 169.
Group: Board / Board and Package Skews
Display Name
Identifier
Description
Average delay difference
between address/command
and CK
BOARD_DDR3_AC_TO
_CK_SKEW_NS
The average delay difference between the address/
command signals and the CK signal, calculated by averaging
the longest and smallest address/command signal trace
delay minus the maximum CK trace delay. Positive values
represent address and command signals that are longer
than CK signals and negative values represent address and
command signals that are shorter than CK signals.
Maximum board skew within
DQS group
BOARD_DDR3_BRD_S
KEW_WITHIN_DQS_N
S
The largest skew between all DQ and DM pins in a DQS
group. This value affects the read capture and write
margins.
Average delay difference
between DQS and CK
BOARD_DDR3_DQS_T
O_CK_SKEW_NS
The average delay difference between the DQS signals and
the CK signal, calculated by averaging the longest and
smallest DQS trace delay minus the CK trace delay. Positive
values represent DQS signals that are longer than CK
signals and negative values represent DQS signals that are
shorter than CK signals.
Package deskewed with board
layout (address/command
bus)
BOARD_DDR3_IS_SK
EW_WITHIN_AC_DES
KEWED
Enable this parameter if you are compensating for package
skew on the address, command, control, and memory clock
buses in the board layout. Include package skew in
calculating the following board skew parameters.
Package deskewed with board
layout (DQS group)
BOARD_DDR3_IS_SK
EW_WITHIN_DQS_DE
SKEWED
Enable this parameter if you are compensating for package
skew on the DQ, DQS, and DM buses in the board layout.
Include package skew in calculating the following board
skew parameters.
Maximum CK delay to DIMM/
device
BOARD_DDR3_MAX_C
K_DELAY_NS
The delay of the longest CK trace from the FPGA to any
DIMM/device.
Maximum DQS delay to DIMM/
device
BOARD_DDR3_MAX_D
QS_DELAY_NS
The delay of the longest DQS trace from the FPGA to any
DIMM/device
Maximum system skew within
address/command bus
BOARD_DDR3_PKG_B
RD_SKEW_WITHIN_A
C_NS
Maximum system skew within address/command bus refers
to the largest skew between the address and command
signals.
Maximum delay difference
between DIMMs/devices
BOARD_DDR3_SKEW_
BETWEEN_DIMMS_NS
The largest propagation delay on DQ signals between ranks
(applicable only when there is more than one rank). For
example: when you configure two ranks using one DIMM
there is a short distance between the ranks for the same DQ
pin; when you implement two ranks using two DIMMs the
distance is larger.
Maximum skew between DQS
groups
BOARD_DDR3_SKEW_
BETWEEN_DQS_NS
The largest skew between DQS signals.
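The average-delay-difference parameters follow directly from the trace delays extracted from your board layout. For the address/command-to-CK case, with assumed example delays (longest address/command trace 0.60 ns, shortest 0.40 ns, longest CK trace 0.45 ns):
\frac{0.60\,\text{ns} + 0.40\,\text{ns}}{2} - 0.45\,\text{ns} = 0.05\,\text{ns}
A positive result indicates that the address/command traces are, on average, longer than the CK trace, matching the sign convention described above.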
7.4.4.7 Arria 10 EMIF IP DDR3 Parameters: Controller
Table 170.
Group: Controller / Low Power Mode
Display Name
Identifier
Description
Auto Power-Down Cycles
CTRL_DDR3_AUTO_P
OWER_DOWN_CYCS
Specifies the number of idle controller cycles after which the
memory device is placed into power-down mode. You can
configure the idle waiting time. The supported range for
number of cycles is from 1 to 65534.
Enable Auto Power-Down
CTRL_DDR3_AUTO_P
OWER_DOWN_EN
Enable this parameter to have the controller automatically
place the memory device into power-down mode after a
specified number of idle controller clock cycles. The idle wait
time is configurable. All ranks must be idle to enter auto
power-down.
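The idle time before the device enters power-down is the programmed cycle count multiplied by the controller clock period. For example, assuming a 200 MHz controller (user logic) clock and an Auto Power-Down Cycles setting of 1000:
1000 \times 5\,\text{ns} = 5\,\text{us}
of idle time elapses before the controller places the memory into power-down.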
Table 171.
Group: Controller / Efficiency
Display Name
Identifier
Description
Address Ordering
CTRL_DDR3_ADDR_O
RDER_ENUM
Controls the mapping between Avalon addresses and
memory device addresses. By changing the value of this
parameter, you can change the mappings between the
Avalon-MM address and the DRAM address.
Enable Auto-Precharge Control
CTRL_DDR3_AUTO_PR
ECHARGE_EN
Select this parameter to enable the auto-precharge control
on the controller top level. If you assert the auto-precharge
control signal while requesting a read or write burst, you
can specify whether the controller should close (auto-precharge) the currently open page at the end of the read
or write burst, potentially making a future access to a
different page of the same bank faster.
Enable Reordering
CTRL_DDR3_REORDE
R_EN
Enable this parameter to allow the controller to perform
command and data reordering. Reordering can improve
efficiency by reducing bus turnaround time and row/bank
switching time. Data reordering allows the single-port
memory controller to change the order of read and write
commands to achieve highest efficiency. Command
reordering allows the controller to issue bank management
commands early based on incoming patterns, so that the
desired row in memory is already open when the command
reaches the memory interface. For more information, refer
to the Data Reordering topic in the EMIF Handbook.
Starvation limit for each
command
CTRL_DDR3_STARVE_
LIMIT
Specifies the number of commands that can be served
before a waiting command is served. The controller employs
a counter to ensure that all requests are served after a predefined interval, ensuring that low-priority requests are
not ignored when data reordering is enabled for efficiency. The
valid range for this parameter is from 1 to 63. For more
information, refer to the Starvation Control topic in the EMIF
Handbook.
Enable Command Priority
Control
CTRL_DDR3_USER_PR
IORITY_EN
Select this parameter to enable user-requested command
priority control on the controller top level. This parameter
instructs the controller to treat a read or write request as
high-priority. The controller attempts to fill high-priority
requests sooner, to reduce latency. Connect this interface to
the conduit of your logic block that determines when the
external memory interface IP treats the read or write
request as a high-priority command.
Table 172.
Group: Controller / Configuration, Status, and Error Handling
Display Name
Identifier
Description
Enable Auto Error Correction
CTRL_DDR3_ECC_AUT
O_CORRECTION_EN
Specifies that the controller perform auto correction when a
single-bit error is detected by the ECC logic.
Enable Error Detection and
Correction Logic with ECC
CTRL_DDR3_ECC_EN
Enables error-correction code (ECC) for single-bit error
correction and double-bit error detection. Your memory
interface must have a width of 16, 24, 40, or 72 bits to use
ECC. ECC is implemented as soft logic.
Enable Memory-Mapped
Configuration and Status
Register (MMR) Interface
CTRL_DDR3_MMR_EN
Enable this parameter to change or read memory timing
parameters, memory address size, mode register settings,
controller status, and request sideband operations.
Table 173.
Group: Controller / Data Bus Turnaround Time
Display Name
Identifier
Description
Additional read-to-read
turnaround time (different
ranks)
CTRL_DDR3_RD_TO_
RD_DIFF_CHIP_DELTA
_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read of one
logical rank to a read of another logical rank. This can
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional read-to-write
turnaround time (different
ranks)
CTRL_DDR3_RD_TO_
WR_DIFF_CHIP_DELT
A_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read of one
logical rank to a write of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional read-to-write
turnaround time (same rank)
CTRL_DDR3_RD_TO_
WR_SAME_CHIP_DELT
A_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read to a write
within the same logical rank. This can help resolve bus
contention problems specific to your board topology. The
value is added to the default which is calculated
automatically. Use the default setting unless you suspect a
problem exists.
Additional write-to-read
turnaround time (different
ranks)
CTRL_DDR3_WR_TO_
RD_DIFF_CHIP_DELTA
_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write of one
logical rank to a read of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional write-to-read
turnaround time (same rank)
CTRL_DDR3_WR_TO_
RD_SAME_CHIP_DELT
A_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write to a read
within the same logical rank. This can help resolve bus
contention problems specific to your board topology. The
value is added to the default which is calculated
automatically. Use the default setting unless you suspect a
problem exists.
Additional write-to-write
turnaround time (different
ranks)
CTRL_DDR3_WR_TO_
WR_DIFF_CHIP_DELT
A_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write of one
logical rank to a write of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
7.4.4.8 Arria 10 EMIF IP DDR3 Parameters: Diagnostics
Table 174.
Group: Diagnostics / Simulation Options
Display Name
Identifier
Description
Abstract phy for fast simulation
DIAG_DDR3_ABSTRA
CT_PHY
Specifies that the system use Abstract PHY for simulation.
Abstract PHY replaces the PHY with a model for fast
simulation and can reduce simulation time by 2-3 times.
Abstract PHY is available for certain protocols and device
families, and only when you select Skip Calibration.
Calibration mode
DIAG_SIM_CAL_MODE
_ENUM
Specifies whether to skip memory interface calibration
during simulation, or to simulate the full calibration process.
Simulating the full calibration process can take hours (or
even days), depending on the width and depth of the
memory interface. You can achieve much faster simulation
times by skipping the calibration process, but that is only
expected to work when the memory model is ideal and the
interconnect delays are zero. If you enable this parameter,
the interface still performs some memory initialization
before starting normal operations. Abstract PHY is
supported with skip calibration.
Table 175.
Group: Diagnostics / Calibration Debug Options
Display Name
Identifier
Description
Enable Daisy-Chaining for
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_A
VALON_MASTER
Specifies that the IP export an Avalon-MM master interface
(cal_debug_out) which can connect to the cal_debug
interface of other EMIF cores residing in the same I/O
column. This parameter applies only if the EMIF Debug
Toolkit or On-Chip Debug Port is enabled. Refer to the
Debugging Multiple EMIFs wiki page for more information
about debugging multiple EMIFs.
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_A
VALON_SLAVE
Specifies the connectivity of an Avalon slave interface for
use by the Quartus Prime EMIF Debug Toolkit or user core
logic. If you set this parameter to "Disabled," no debug
features are enabled. If you set this parameter to "Export,"
an Avalon slave interface named "cal_debug" is exported
from the IP. To use this interface with the EMIF Debug
Toolkit, you must instantiate and connect an EMIF debug
interface IP core to it, or connect it to the cal_debug_out
interface of another EMIF core. If you select "Add EMIF
Debug Interface", an EMIF debug interface component
containing a JTAG Avalon Master is connected to the debug
port, allowing the core to be accessed by the EMIF Debug
Toolkit. Only one EMIF debug interface should be
instantiated per I/O column. You can chain additional EMIF
or PHYLite cores to the first by enabling the "Enable Daisy-Chaining for Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option for all cores in the chain, and selecting
"Export" for the "Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option on all cores after the first.
Interface ID
DIAG_INTERFACE_ID
Identifies interfaces within the I/O column, for use by the
EMIF Debug Toolkit and the On-Chip Debug Port. Interface
IDs should be unique among EMIF cores within the same
I/O column. If the Quartus Prime EMIF Debug Toolkit/On-Chip Debug Port parameter is set to Disabled, the interface
ID is unused.
Use Soft NIOS Processor for
On-Chip Debug
DIAG_SOFT_NIOS_M
ODE
Enables a soft Nios processor as a peripheral component to
access the On-Chip Debug Port. Only one interface in a
column can activate this option.
Table 176.
Group: Diagnostics / Example Design
Display Name
Identifier
Description
Enable In-System-Sources-and-Probes
DIAG_EX_DESIGN_IS
SP_EN
Enables In-System-Sources-and-Probes in the example
design for common debug signals, such as calibration status
or example traffic generator per-bit status. This parameter
must be enabled if you want to do driver margining.
Number of core clocks sharing
slaves to instantiate in the
example design
DIAG_EX_DESIGN_NU
M_OF_SLAVES
Specifies the number of core clock sharing slaves to
instantiate in the example design. This parameter applies
only if you set the "Core clocks sharing" parameter in the
"General" tab to Master or Slave.
Table 177.
Group: Diagnostics / Traffic Generator
Display Name
Identifier
Description
Bypass the default traffic
pattern
DIAG_BYPASS_DEFAU
LT_PATTERN
Specifies that the controller/interface bypass the traffic
generator 2.0 default pattern after reset. If you do not
enable this parameter, the traffic generator does not assert
a pass or fail status until the generator is configured and
signaled to start by its Avalon configuration interface.
Bypass the traffic generator
repeated-writes/repeated-reads test pattern
DIAG_BYPASS_REPEA
T_STAGE
Specifies that the controller/interface bypass the traffic
generator's repeat test stage. If you do not enable this
parameter, every write and read is repeated several times.
Bypass the traffic generator
stress pattern
DIAG_BYPASS_STRES
S_STAGE
Specifies that the controller/interface bypass the traffic
generator's stress pattern stage. (Stress patterns are meant
to create worst-case signal integrity patterns on the data
pins.) If you do not enable this parameter, the traffic
generator does not assert a pass or fail status until the
generator is configured and signaled to start by its Avalon
configuration interface.
Bypass the user-configured
traffic stage
DIAG_BYPASS_USER_
STAGE
Specifies that the controller/interface bypass the user-configured traffic generator's pattern after reset. If you do
not enable this parameter, the traffic generator does not
assert a pass or fail status until the generator is configured
and signaled to start by its Avalon configuration interface.
Configuration can be done by connecting to the traffic
generator via the EMIF Debug Toolkit, or by using custom
logic connected to the Avalon-MM configuration slave port
on the traffic generator. Configuration can also be simulated
using the example testbench provided in the
altera_emif_avl_tg_2_tb.sv file.
Run diagnostic on infinite test
duration
DIAG_INFI_TG2_ERR_
TEST
Specifies that the traffic generator run indefinitely until the
first error is detected.
Export Traffic Generator 2.0
configuration interface
DIAG_TG_AVL_2_EXP
ORT_CFG_INTERFACE
Specifies that the IP export an Avalon-MM slave port for
configuring the Traffic Generator. This is required only if you
are configuring the traffic generator through user logic and
not through the EMIF Debug Toolkit.
Use configurable Avalon traffic
generator 2.0
DIAG_USE_TG_AVL_2
This option allows users to add the new configurable Avalon
traffic generator to the example design.
Table 178.
Group: Diagnostics / Performance
Display Name
Identifier
Description
Enable Efficiency Monitor
DIAG_EFFICIENCY_MONITOR
Adds an Efficiency Monitor component to the Avalon-MM
interface of the memory controller, allowing you to view
efficiency statistics of the interface. You can access the
efficiency statistics using the EMIF Debug Toolkit.
Table 179.
Group: Diagnostics / Miscellaneous
Display Name
Identifier
Description
Use short Qsys interface names
SHORT_QSYS_INTERFACE_NAMES
Specifies the use of short interface names, for improved
usability and consistency with other Qsys components. If
this parameter is disabled, the names of Qsys interfaces
exposed by the IP will include the type and direction of the
interface. Long interface names are supported for
backward-compatibility and will be removed in a future
release.
7.4.4.9 Arria 10 EMIF IP DDR3 Parameters: Example Designs
Table 180.
Group: Example Designs / Available Example Designs
Display Name
Identifier
Description
Select design
EX_DESIGN_GUI_DDR3_SEL_DESIGN
Specifies the creation of a full Quartus Prime project,
instantiating an external memory interface and an example
traffic generator, according to your parameterization. After
the design is created, you can specify the target device and
pin location assignments, run a full compilation, verify
timing closure, and test the interface on your board using
the programming file created by the Quartus Prime
assembler. The 'Generate Example Design' button lets you
generate simulation or synthesis file sets.
Table 181.
Group: Example Designs / Example Design Files
Display Name
Identifier
Description
Simulation
EX_DESIGN_GUI_DDR
3_GEN_SIM
Specifies that the 'Generate Example Design' button create
all necessary file sets for simulation. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, simulation file sets are not created.
Instead, the output directory will contain the ed_sim.qsys
file which holds Qsys details of the simulation example
design, and a make_sim_design.tcl file with other
corresponding tcl files. You can run make_sim_design.tcl
from a command line to generate the simulation example
design. The generated example designs for various
simulators are stored in the /sim sub-directory.
Synthesis
EX_DESIGN_GUI_DDR
3_GEN_SYNTH
Specifies that the 'Generate Example Design' button create
all necessary file sets for synthesis. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, synthesis file sets are not created.
Instead, the output directory will contain the ed_synth.qsys
file which holds Qsys details of the synthesis example
design, and a make_qii_design.tcl script with other
corresponding tcl files. You can run make_qii_design.tcl
from a command line to generate the synthesis example
design. The generated example design is stored in the /qii
sub-directory.
Table 182.
Group: Example Designs / Generated HDL Format
Display Name
Identifier
Description
Simulation HDL format
EX_DESIGN_GUI_DDR3_HDL_FORMAT
This option lets you choose the format of HDL in which generated simulation files are created.
Table 183.
Group: Example Designs / Target Development Kit
Display Name
Identifier
Description
Select board
EX_DESIGN_GUI_DDR3_TARGET_DEV_KIT
Specifies that when you select a development kit with a
memory module, the generated example design contains all
settings and fixed pin assignments to run on the selected
board. You must select a development kit preset to
generate a working example design for the specified
development kit. Any IP settings not applied directly from a
development kit preset will not have guaranteed results
when testing the development kit. To exclude hardware
support of the example design, select 'none' from the
'Select board' pull down menu. When you apply a
development kit preset, all IP parameters are automatically
set appropriately to match the selected preset. If you want
to save your current settings, you should do so before you
apply the preset. You can save your settings under a
different name using File->Save as.
7.4.4.10 About Memory Presets
Presets help simplify the process of copying memory parameter values from memory
device data sheets to the EMIF parameter editor.
For DDRx protocols, the memory presets are named using the following convention:
PROTOCOL-SPEEDBIN LATENCY FORMAT-AND-TOPOLOGY CAPACITY (INTERNAL-ORGANIZATION)
For example, the preset named DDR4-2666U CL18 Component 1CS 2Gb (512Mb
x 4) refers to a DDR4 x4 component rated at the DDR4-2666U JEDEC speed bin, with
nominal CAS latency of 18 cycles, one chip-select, and a total memory space of 2Gb.
The JEDEC memory specification defines multiple speed bins for a given frequency
(that is, DDR4-2666U and DDR4-2666V). You may be able to determine the exact
speed bin implemented by your memory device using its nominal latency. When in
doubt, contact your memory vendor.
For RLDRAMx and QDRx protocols, the memory presets are named based on the
vendor's device part number.
When the preset list does not contain the exact configuration required, you can still
minimize data entry by selecting the preset closest to your configuration and then
modifying parameters as required.
Prior to production you should always review the parameter values to ensure that they
match your memory device data sheet, regardless of whether a preset is used or not.
Incorrect memory parameters can cause functional failures.
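The naming convention above is regular enough to decompose programmatically. The following minimal Python sketch (not part of the Quartus Prime tools; the field names and the regular expression are assumptions for illustration) splits a DDRx preset name into its fields:

import re

def parse_ddrx_preset(name):
    # Fields follow the convention:
    # PROTOCOL-SPEEDBIN LATENCY FORMAT-AND-TOPOLOGY CAPACITY (INTERNAL-ORGANIZATION)
    pattern = (r"(?P<protocol_speedbin>\S+)\s+"
               r"(?P<latency>CL\d+)\s+"
               r"(?P<format_and_topology>.+?)\s+"
               r"(?P<capacity>\S+)\s+"
               r"\((?P<internal_organization>[^)]+)\)")
    match = re.match(pattern, name)
    return match.groupdict() if match else None

# Example: the DDR4 preset discussed above
print(parse_ddrx_preset("DDR4-2666U CL18 Component 1CS 2Gb (512Mb x 4)"))
# -> {'protocol_speedbin': 'DDR4-2666U', 'latency': 'CL18',
#     'format_and_topology': 'Component 1CS', 'capacity': '2Gb',
#     'internal_organization': '512Mb x 4'}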
7.4.4.11 x4 Mode for Arria 10 External Memory Interface
Non-HPS Arria 10 external memory interfaces support DQ pins-per-DQS group-of-4
(x4 mode) for DDR3 and DDR4 memory protocols.
The following restrictions apply to the use of x4 mode:
• The total interface width is limited to 72 bits.
• You must disable the Enable DM pins option.
• For DDR4, you must disable the DBI option.
Note: x4 mode is not available for Arria 10 EMIF IP for HPS.
7.4.4.12 Additional Notes About Parameterizing Arria 10 EMIF IP for HPS
Although Arria 10 EMIF IP and Arria 10 EMIF IP for HPS are similar components, there
are some additional requirements necessary in the HPS case.
The following rules and restrictions apply to Arria 10 EMIF IP for HPS:
• Supported memory protocols are limited to DDR3 and DDR4.
• The only supported configuration is the hard PHY with the hard memory controller.
• The maximum memory clock frequency for Arria 10 EMIF IP for HPS may be different than for regular Arria 10 EMIF IP. Refer to the External Memory Interface Spec Estimator for details.
• Only half-rate interfaces are supported.
• Sharing of clocks is not supported.
• The total interface width is limited to a multiple of 16, 24, 40, or 72 bits (with ECC enabled), or a positive value divisible by the number of DQ pins per DQS group (with ECC not enabled). For devices other than 10ASXXXKX40, the total interface width is further limited to a maximum of 40 bits with ECC enabled and 32 bits with ECC not enabled.
• Only x8 data groups are supported; that is, DQ pins-per-DQS group must be 8.
• DM pins must be enabled.
• The EMIF debug toolkit is not supported.
• Ping Pong PHY is not supported.
• The interface to and from the HPS is a fixed-width conduit.
• A maximum of 3 address/command I/O lanes are supported. For example:
  — DDR3
    • For component format, maximum number of chip selects is 2.
    • For UDIMM or SODIMM format:
      — Maximum number of DIMMs is 2, when the number of physical ranks per DIMM is 1.
      — Maximum number of DIMMs is 1, when the number of physical ranks per DIMM is 2.
      — Maximum number of physical ranks per DIMM is 2, when the number of DIMMs is 1.
    • For RDIMM format:
      — Maximum number of clocks is 1.
      — Maximum number of DIMMs is 1.
      — Maximum number of physical ranks per DIMM is 2.
    • LRDIMM memory format is not supported.
  — DDR4
    • For component format:
      — Maximum number of clocks is 1.
      — Maximum number of chip selects is 2.
    • For UDIMM or RDIMM format:
      — Maximum number of clocks is 1.
      — Maximum number of DIMMs is 2, when the number of physical ranks per DIMM is 1.
      — Maximum number of DIMMs is 1, when the number of physical ranks per DIMM is 2.
      — Maximum number of physical ranks per DIMM is 2, when the number of DIMMs is 1.
    • For SODIMM format:
      — Maximum number of clocks is 1.
      — Maximum number of DIMMs is 1.
      — Maximum number of physical ranks per DIMM is 1.
Arria 10 EMIF IP for HPS also has specific pin-out requirements. For information, refer to Planning Pin and FPGA Resources.
7.4.5 Arria 10 EMIF IP LPDDR3 Parameters
The Arria 10 EMIF IP parameter editor allows you to parameterize settings for the
Arria 10 EMIF IP.
The text window at the bottom of the parameter editor displays information about the
memory interface, as well as warning and error messages. You should correct any
errors indicated in this window before clicking the Finish button.
Note: Default settings are the minimum required to achieve timing, and may vary depending on memory protocol.
The following tables describe the parameterization settings available in the parameter
editor for the Arria 10 EMIF IP.
7.4.5.1 Arria 10 EMIF IP LPDDR3 Parameters: General
Table 184.
Group: General / FPGA
Display Name
Identifier
Description
Speed grade
PHY_FPGA_SPEEDGRADE_GUI
Indicates the device speed grade, and whether it is an
engineering sample (ES) or production device. This value is
based on the device that you select in the parameter editor.
If you do not specify a device, the system assumes a
default value. Ensure that you always specify the correct
device during IP generation, otherwise your IP may not
work in hardware.
Table 185.
Group: General / Interface
Display Name
Identifier
Description
Configuration
PHY_CONFIG_ENUM
Specifies the configuration of the memory interface. The
available options depend on the protocol in use. Options
include Hard PHY and Hard Controller, Hard PHY and Soft
Controller, or Hard PHY only. If you select Hard PHY only,
the AFI interface is exported to allow connection of a
custom memory controller or third-party IP.
Instantiate two controllers
sharing a Ping Pong PHY
PHY_PING_PONG_EN
Specifies the instantiation of two identical memory
controllers that share an address/command bus through the
use of Ping Pong PHY. This parameter is available only if you
specify the Hard PHY and Hard Controller option. When this
parameter is enabled, the IP exposes two independent
Avalon interfaces to the user logic, and a single external
memory interface with double width for the data bus and
the CS#, CKE, ODT, and CK/CK# signals.
Table 186.
Group: General / Clocks
Display Name
Identifier
Description
Core clocks sharing
PHY_CORE_CLKS_SHARING_ENUM
When a design contains multiple interfaces of the same
protocol, rate, frequency, and PLL reference clock source,
they can share a common set of core clock domains. By
sharing core clock domains, they reduce clock network
usage and avoid clock synchronization logic between the
interfaces. To share core clocks, denote one of the
interfaces as "Master", and the remaining interfaces as
"Slave". In the RTL, connect the clks_sharing_master_out
signal from the master interface to the
clks_sharing_slave_in signal of all the slave interfaces. Both
master and slave interfaces still expose their own output
clock ports in the RTL (for example, emif_usr_clk, afi_clk),
but the physical signals are equivalent, hence it does not
matter whether a clock port from a master or a slave is
used. As the combined width of all interfaces sharing the
same core clock increases, you may encounter timing
closure difficulty for transfers between the FPGA core and
the periphery.
Use recommended PLL
reference clock frequency
PHY_LPDDR3_DEFAUL
T_REF_CLK_FREQ
Specifies that the PLL reference clock frequency is
automatically calculated for best performance. If you want
to specify a different PLL reference clock frequency, uncheck
the check box for this parameter.
Memory clock frequency
PHY_MEM_CLK_FREQ_
MHZ
Specifies the operating frequency of the memory interface
in MHz. If you change the memory frequency, you should
update the memory latency parameters on the "Memory"
tab and the memory timing parameters on the "Mem
Timing" tab.
Clock rate of user logic
PHY_RATE_ENUM
Specifies the relationship between the user logic clock
frequency and the memory clock frequency. For example, if
the memory clock sent from the FPGA to the memory
device is toggling at 800MHz, a quarter-rate interface
means that the user logic in the FPGA runs at 200MHz.
PLL reference clock frequency
PHY_REF_CLK_FREQ_
MHZ
Specifies the PLL reference clock frequency. You must
configure this parameter only if you do not check the "Use
recommended PLL reference clock frequency" parameter. To
configure this parameter, select a valid PLL reference clock
frequency from the list. The values in the list can change if
you change the memory interface frequency and/or the
clock rate of the user logic. For best jitter performance, you
should use the fastest possible PLL reference clock
frequency.
PLL reference clock jitter
PHY_REF_CLK_JITTER
_PS
Specifies the peak-to-peak jitter on the PLL reference clock
source. The clock source of the PLL reference clock must
meet or exceed the following jitter requirements: 10ps peak
to peak, or 1.42ps RMS at 1e-12 BER, 1.22ps at 1e-16 BER.
Specify additional core clocks
based on existing PLL
PLL_ADD_EXTRA_CLK
S
Displays additional parameters allowing you to create
additional output clocks based on the existing PLL. This
parameter provides an alternative clock-generation
mechanism for when your design exhausts available PLL
resources. The additional output clocks that you create can
be fed into the core. Clock signals created with this
parameter are synchronous to each other, but asynchronous
to the memory interface core clock domains (such as
emif_usr_clk or afi_clk). You must follow proper clock-domain-crossing techniques when transferring data between
clock domains.
Table 187.
Group: General / Additional Core Clocks
Display Name
Identifier
Description
Number of additional core clocks
PLL_USER_NUM_OF_EXTRA_CLKS
Specifies the number of additional output clocks to create
from the PLL.
7.4.5.2 Arria 10 EMIF IP LPDDR3 Parameters: Memory
Table 188.
Group: Memory / Topology
Display Name
Identifier
Description
Bank address width
MEM_LPDDR3_BANK_
ADDR_WIDTH
The number of bank address bits.
Number of clocks
MEM_LPDDR3_CK_WI
DTH
Number of CK/CK# clock pairs exposed by the memory
interface.
Column address width
MEM_LPDDR3_COL_A
DDR_WIDTH
The number of column address bits.
Number of chip selects
MEM_LPDDR3_DISCR
ETE_CS_WIDTH
Total number of chip selects in the interface.
Enable DM pins
MEM_LPDDR3_DM_EN
Indicates whether the interface uses data mask (DM) pins. This
feature allows specified portions of the data bus to be
written to memory (not available in x4 mode). One DM pin
exists per DQS group.
Number of DQS groups
MEM_LPDDR3_DQS_W
IDTH
Specifies the total number of DQS groups in the interface.
This value is automatically calculated as the DQ width
divided by the number of DQ pins per DQS group.
DQ width
MEM_LPDDR3_DQ_WI
DTH
Total number of DQ pins in the interface.
Row address width
MEM_LPDDR3_ROW_A
DDR_WIDTH
The number of row address bits.
Table 189.
Group: Memory / Latency and Burst
Display Name
Identifier
Description
Burst length
MEM_LPDDR3_BL
Burst length of the memory device.
Data latency
MEM_LPDDR3_DATA_L
ATENCY
Determines the mode register setting that controls the data
latency. Sets both READ and WRITE latency (RL and WL).
DQ ODT
MEM_LPDDR3_DQODT
The ODT setting for the DQ pins during writes.
Power down ODT
MEM_LPDDR3_PDODT
Specifies whether ODT is turned on or off during power-down.
WL set
MEM_LPDDR3_WLSEL
ECT
Specifies the write latency set to use. Only certain
memory devices support WL Set B. Refer to the WRITE
Latency table in the memory vendor data sheet.
7.4.5.3 Arria 10 EMIF IP LPDDR3 Parameters: FPGA I/O
You should use Hyperlynx* or similar simulators to determine the best settings for
your board. Refer to the EMIF Simulation Guidance wiki page for additional
information.
Table 190.
Group: FPGA IO / FPGA IO Settings
Display Name
Identifier
Description
Use default I/O settings
PHY_LPDDR3_DEFAULT_IO
Specifies that a legal set of I/O settings is automatically
selected. The default I/O settings are not necessarily
optimized for a specific board. To achieve optimal signal
integrity, perform I/O simulations with IBIS models and
enter the I/O settings manually, based on simulation
results.
Voltage
PHY_LPDDR3_IO_VOL
TAGE
The voltage level for the I/O pins driving the signals
between the memory device and the FPGA memory
interface.
Periodic OCT re-calibration
PHY_USER_PERIODIC
_OCT_RECAL_ENUM
Specifies that the system periodically recalibrate on-chip
termination (OCT) to minimize variations in termination
value caused by changing operating conditions (such as
changes in temperature). By recalibrating OCT, I/O timing
margins are improved. When enabled, this parameter
causes the PHY to halt user traffic about every 0.5 seconds
for about 1900 memory clock cycles, to perform OCT
recalibration. Efficiency is reduced by about 1% when this
option is enabled.
Table 191.
Group: FPGA IO / Address/Command
Display Name
Identifier
Description
I/O standard
PHY_LPDDR3_USER_A
C_IO_STD_ENUM
Specifies the I/O electrical standard for the address/
command pins of the memory interface. The selected I/O
standard configures the circuit within the I/O buffer to
match the industry standard.
Output mode
PHY_LPDDR3_USER_A
C_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_LPDDR3_USER_A
C_SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 192.
Group: FPGA IO / Memory Clock
Display Name
Identifier
Description
I/O standard
PHY_LPDDR3_USER_C
K_IO_STD_ENUM
Specifies the I/O electrical standard for the memory clock
pins. The selected I/O standard configures the circuit within
the I/O buffer to match the industry standard.
Output mode
PHY_LPDDR3_USER_C
K_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_LPDDR3_USER_C
K_SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 193.
Group: FPGA IO / Data Bus
Display Name
Identifier
Description
Use recommended initial Vrefin
PHY_LPDDR3_USER_A
UTO_STARTING_VREF
IN_EN
Specifies that the initial Vrefin setting is calculated
automatically, to a reasonable value based on termination
settings.
Input mode
PHY_LPDDR3_USER_D
ATA_IN_MODE_ENUM
This parameter allows you to change the input termination
settings for the selected I/O standard. Perform board
simulation with IBIS models to determine the best settings
for your design.
I/O standard
PHY_LPDDR3_USER_D
ATA_IO_STD_ENUM
Specifies the I/O electrical standard for the data and data
clock/strobe pins of the memory interface. The selected I/O
standard option configures the circuit within the I/O buffer
to match the industry standard.
Output mode
PHY_LPDDR3_USER_D
ATA_OUT_MODE_ENU
M
This parameter allows you to change the output current
drive strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Initial Vrefin
PHY_LPDDR3_USER_S
TARTING_VREFIN
Specifies the initial value for the reference voltage on the
data pins (Vrefin). This value is entered as a percentage of
the supply voltage level on the I/O pins. The specified value
serves as a starting point and may be overridden by
calibration to provide better timing margins. If you choose
to skip Vref calibration (Diagnostics tab), this is the value
that is used as the Vref for the interface.
Table 194.
Group: FPGA IO / PHY Inputs
Display Name
Identifier
Description
PLL reference clock I/O
standard
PHY_LPDDR3_USER_P
LL_REF_CLK_IO_STD_
ENUM
Specifies the I/O standard for the PLL reference clock of the
memory interface.
RZQ I/O standard
PHY_LPDDR3_USER_R
ZQ_IO_STD_ENUM
Specifies the I/O standard for the RZQ pin used in the
memory interface.
RZQ resistor
PHY_RZQ
Specifies the reference resistor used to calibrate the on-chip
termination value. You should connect the RZQ pin to GND
through an external resistor of the specified value.
7.4.5.4 Arria 10 EMIF IP LPDDR3 Parameters: Mem Timing
These parameters should be read from the table in the datasheet associated with the
speed bin of the memory device (not necessarily the frequency at which the interface
is running).
Table 195.
Group: Mem Timing / Parameters dependent on Speed Bin
Display Name
Identifier
Description
Speed bin
MEM_LPDDR3_SPEED
BIN_ENUM
The speed grade of the memory device used. This
parameter refers to the maximum rate at which the
memory device is specified to run.
tDH (base) DC level
MEM_LPDDR3_TDH_D
C_MV
tDH (base) DC level refers to the voltage level which the
data bus must not cross during the hold window. The signal
is considered stable only if it remains above this voltage
level (for a logic 1) or below this voltage level (for a logic 0)
for the entire hold period.
tDH (base)
MEM_LPDDR3_TDH_P
S
tDH (base) refers to the hold time for the Data (DQ) bus
after the rising edge of CK.
tDQSCK
MEM_LPDDR3_TDQSC
K_PS
tDQSCK describes the skew between the memory clock (CK)
and the input data strobes (DQS) used for reads. It is the
time between the rising data strobe edge (DQS, DQS#)
relative to the rising CK edge.
tDQSQ
MEM_LPDDR3_TDQSQ
_PS
tDQSQ describes the latest valid transition of the associated
DQ pins for a READ. tDQSQ specifically refers to the DQS,
DQS# to DQ skew. It is the length of time between the
DQS, DQS# crossing to the last valid transition of the
slowest DQ pin in the DQ group associated with that DQS
strobe.
tDSH
MEM_LPDDR3_TDSH_
CYC
tDSH specifies the write DQS hold time. This is the time
difference between the rising CK edge and the falling edge
of DQS, measured as a percentage of tCK.
tDSS
MEM_LPDDR3_TDSS_
CYC
tDSS describes the time between the falling edge of DQS to
the rising edge of the next CK transition.
tDS (base) AC level
MEM_LPDDR3_TDS_A
C_MV
tDS (base) AC level refers to the voltage level which the
data bus must cross and remain above during the setup
margin window. The signal is considered stable only if it
remains above this voltage level (for a logic 1) or below this
voltage level (for a logic 0) for the entire setup period.
tDS (base)
MEM_LPDDR3_TDS_P
S
tDS(base) refers to the setup time for the Data (DQ) bus
before the rising edge of the DQS strobe.
tIHCA (base) DC level
MEM_LPDDR3_TIH_D
C_MV
DC level of tIH (base), for derating purposes.
tIHCA (base)
MEM_LPDDR3_TIH_PS
Address and control hold after CK clock rise
tINIT
MEM_LPDDR3_TINIT_
US
tINIT describes the time duration of the memory
initialization after a device power-up. After RESET_n is deasserted, wait for another 500us until CKE becomes active.
During this time, the DRAM will start internal initialization;
this will be done independently of external clocks.
tISCA (base) AC level
MEM_LPDDR3_TIS_AC
_MV
AC level of tIS (base), for derating purposes.
tISCA (base)
MEM_LPDDR3_TIS_PS
Address and control setup to CK clock rise
tMRR
MEM_LPDDR3_TMRR_
CK_CYC
tMRR describes the minimum MODE REGISTER READ
command period.
tMRW
MEM_LPDDR3_TMRW_
CK_CYC
tMRW describes the minimum MODE REGISTER WRITE
command period.
tQH
MEM_LPDDR3_TQH_C
YC
tQH specifies the output hold time for the DQ in relation to
DQS, DQS#. It is the length of time between the DQS,
DQS# crossing to the earliest invalid transition of the
fastest DQ pin in the DQ group associated with that DQS
strobe.
tQSH
MEM_LPDDR3_TQSH_
CYC
tQSH refers to the differential High Pulse Width, which is
measured as a percentage of tCK. It is the time during
which the DQS is high for a read.
tRAS
MEM_LPDDR3_TRAS_
NS
tRAS describes the activate to precharge duration. A row
cannot be deactivated until the tRAS time has been met.
Therefore tRAS determines how long the memory has to
wait after an activate command before a precharge command
can be issued to close the row.
tRCD
MEM_LPDDR3_TRCD_
NS
tRCD, row command delay, describes the amount of delay
between the activation of a row through the RAS command
and the access to the data through the CAS command.
tWLH
MEM_LPDDR3_TWLH_
PS
tWLH describes the write leveling hold time from the rising
edge of DQS to the rising edge of CK.
tWLS
MEM_LPDDR3_TWLS_
PS
tWLS describes the write leveling setup time. It is measured
from the rising edge of CK to the rising edge of DQS.
tWR
MEM_LPDDR3_TWR_N
S
tWR refers to the Write Recovery time. It specifies the
number of clock cycles needed to complete a write before a
precharge command can be issued.
Table 196.
Group: Mem Timing / Parameters dependent on Speed Bin, Operating
Frequency, and Page Size
Display Name
Identifier
Description
tFAW
MEM_LPDDR3_TFAW_
NS
tFAW refers to the four activate window time. It describes
the period of time during which only four banks can be
active.
tRRD
MEM_LPDDR3_TRRD_
CYC
tRRD refers to the Row Active to Row Active Delay. It is the
minimum time interval (measured in memory clock cycles)
between two activate commands to rows in different banks
in the same rank
tRTP
MEM_LPDDR3_TRTP_C
YC
tRTP refers to the internal READ Command to PRECHARGE
Command delay. It is the number of memory clock cycles
that is needed between a read command and a precharge
command to the same rank.
tWTR
MEM_LPDDR3_TWTR_
CYC
tWTR or Write Timing Parameter describes the delay from
start of internal write transaction to internal read command,
for accesses to the same bank. The delay is measured from
the first rising memory clock edge after the last write data
is received to the rising memory clock edge when a read
command is received.
Table 197.
Group: Mem Timing / Parameters dependent on Density and Temperature
Display Name
Identifier
Description
tREFI
MEM_LPDDR3_TREFI_US
tREFI refers to the average periodic refresh interval. It is
the maximum amount of time the memory can tolerate in
between each refresh command.
tRFCab
MEM_LPDDR3_TRFC_NS
Auto-refresh command interval (all banks)
7.4.5.5 Arria 10 EMIF IP LPDDR3 Parameters: Board
Table 198.
Group: Board / Intersymbol Interference/Crosstalk
Display Name
Identifier
Description
Address and command ISI/
crosstalk
BOARD_LPDDR3_USER_AC_ISI_NS
The address and command window reduction due to
intersymbol interference and crosstalk effects. The number
to be entered is the total of the measured loss of margin on
the setup side plus the measured loss of margin on the hold
side. Refer to the EMIF Simulation Guidance wiki page for
additional information.
Read DQS/DQS# ISI/crosstalk
BOARD_LPDDR3_USER_RCLK_ISI_NS
The reduction of the read data window due to intersymbol
interference and crosstalk effects on the DQS/DQS# signal
when driven by the memory device during a read. The
number to be entered is the total of the measured loss of
margin on the setup side plus the measured loss of margin
on the hold side. Refer to the EMIF Simulation Guidance
wiki page for additional information.
Read DQ ISI/crosstalk
BOARD_LPDDR3_USER_RDATA_ISI_NS
The reduction of the read data window due to intersymbol
interference and crosstalk effects on the DQ signal when
driven by the memory device during a read. The number to
be entered is the total of the measured loss of margin on
the setup side plus the measured loss of margin on the hold
side. Refer to the EMIF Simulation Guidance wiki page for
additional information.
Write DQS/DQS# ISI/crosstalk
BOARD_LPDDR3_USER_WCLK_ISI_NS
The reduction of the write data window due to intersymbol
interference and crosstalk effects on the DQS/DQS# signal
when driven by the FPGA during a write. The number to be
entered is the total of the measured loss of margin on the
setup side plus the measured loss of margin on the hold
side. Refer to the EMIF Simulation Guidance wiki page for
additional information.
Write DQ ISI/crosstalk
BOARD_LPDDR3_USER_WDATA_ISI_NS
The reduction of the write data window due to intersymbol
interference and crosstalk effects on the DQ signal when
driven by the FPGA during a write. The number to be
entered is the total of the measured loss of margin on the
setup side plus the measured loss of margin on the hold
side. Refer to the EMIF Simulation Guidance wiki page for
additional information.
Use default ISI/crosstalk
values
BOARD_LPDDR3_USE_DEFAULT_ISI_VALUES
You can enable this option to use default intersymbol interference and crosstalk values for your topology. Note that the default values are not optimized for your board. For optimal signal integrity, it is recommended that you do not enable this parameter, but instead perform I/O simulation using IBIS models and Hyperlynx*, and manually enter values based on your simulation results.
Table 199.
Group: Board / Board and Package Skews
Display Name
Identifier
Description
Average delay difference
between address/command
and CK
BOARD_LPDDR3_AC_TO_CK_SKEW_NS
The average delay difference between the address/
command signals and the CK signal, calculated by averaging
the longest and smallest address/command signal trace
delay minus the maximum CK trace delay. Positive values
represent address and command signals that are longer
than CK signals and negative values represent address and
command signals that are shorter than CK signals.
Maximum board skew within
DQS group
BOARD_LPDDR3_BRD_SKEW_WITHIN_DQS_NS
The largest skew between all DQ and DM pins in a DQS
group. This value affects the read capture and write
margins.
Average delay difference
between DQS and CK
BOARD_LPDDR3_DQS_TO_CK_SKEW_NS
The average delay difference between the DQS signals and
the CK signal, calculated by averaging the longest and
smallest DQS trace delay minus the CK trace delay. Positive
values represent DQS signals that are longer than CK
signals and negative values represent DQS signals that are
shorter than CK signals.
Package deskewed with board
layout (address/command
bus)
BOARD_LPDDR3_IS_SKEW_WITHIN_AC_DESKEWED
Enable this parameter if you are compensating for package
skew on the address, command, control, and memory clock
buses in the board layout. Include package skew in
calculating the following board skew parameters.
Package deskewed with board
layout (DQS group)
BOARD_LPDDR3_IS_SKEW_WITHIN_DQS_DESKEWED
Enable this parameter if you are compensating for package
skew on the DQ, DQS, and DM buses in the board layout.
Include package skew in calculating the following board
skew parameters.
Maximum CK delay to device
BOARD_LPDDR3_MAX_CK_DELAY_NS
The maximum CK delay to device refers to the delay of the
longest CK trace from the FPGA to any device.
Maximum DQS delay to device
BOARD_LPDDR3_MAX_DQS_DELAY_NS
The maximum DQS delay to device refers to the delay of
the longest DQS trace from the FPGA to any device
Maximum system skew within
address/command bus
BOARD_LPDDR3_PKG_BRD_SKEW_WITHIN_AC_NS
Maximum system skew within address/command bus refers
to the largest skew between the address and command
signals.
Maximum delay difference
between devices
BOARD_LPDDR3_SKEW_BETWEEN_DIMMS_NS
This parameter describes the largest propagation delay on
the DQ signals between ranks. For example, in a two-rank
configuration where devices are placed in series, there is an
extra propagation delay for DQ signals going to and coming
back from the furthest device compared to the nearest
device. This parameter is only applicable when there is
more than one rank.
Maximum skew between DQS
groups
BOARD_LPDDR3_SKEW_BETWEEN_DQS_NS
The largest skew between DQS signals.
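The skew entries in the preceding table are derived from trace delays rather than measured directly on the signals. As a minimal illustration (the function and variable names below are hypothetical and not part of the Quartus Prime software), this Python sketch computes the "average delay difference" values from extracted trace delays, following the definitions given above: average the longest and shortest trace delay of the signal group, then subtract the CK trace delay.

def avg_delay_difference_ns(signal_delays_ns, ck_delay_ns):
    # Positive result: signal traces are longer than CK; negative: shorter.
    return (max(signal_delays_ns) + min(signal_delays_ns)) / 2.0 - ck_delay_ns

# Hypothetical trace delays extracted from a board layout, in ns
ac_delays = [0.62, 0.65, 0.70, 0.68]   # address/command traces
dqs_delays = [0.58, 0.61]              # DQS traces
ck_delay = 0.60                        # CK trace delay (use the maximum CK delay)

print(avg_delay_difference_ns(ac_delays, ck_delay))   # address/command to CK
print(avg_delay_difference_ns(dqs_delays, ck_delay))  # DQS to CK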
7.4.5.6 Arria 10 EMIF IP LPDDR3 Parameters: Controller
Table 200.
Group: Controller / Low Power Mode
Display Name
Identifier
Description
Auto Power-Down Cycles
CTRL_LPDDR3_AUTO_POWER_DOWN_CYCS
Specifies the number of idle controller cycles after which the
memory device is placed into power-down mode. You can
configure the idle waiting time. The supported range for
number of cycles is from 1 to 65534.
Enable Auto Power-Down
CTRL_LPDDR3_AUTO_POWER_DOWN_EN
Enable this parameter to have the controller automatically
place the memory device into power-down mode after a
specified number of idle controller clock cycles. The idle wait
time is configurable. All ranks must be idle to enter auto
power-down.
Table 201.
Group: Controller / Efficiency
Display Name
Identifier
Description
Address Ordering
CTRL_LPDDR3_ADDR_ORDER_ENUM
Controls the mapping between Avalon addresses and
memory device addresses. By changing the value of this
parameter, you can change the mappings between the
Avalon-MM address and the DRAM address.
Enable Auto-Precharge Control
CTRL_LPDDR3_AUTO_PRECHARGE_EN
Select this parameter to enable the auto-precharge control
on the controller top level. If you assert the auto-precharge
control signal while requesting a read or write burst, you
can specify whether the controller should close (auto-precharge) the currently open page at the end of the read
or write burst, potentially making a future access to a
different page of the same bank faster.
Enable Reordering
CTRL_LPDDR3_REORDER_EN
Enable this parameter to allow the controller to perform
command and data reordering. Reordering can improve
efficiency by reducing bus turnaround time and row/bank
switching time. Data reordering allows the single-port
memory controller to change the order of read and write
commands to achieve highest efficiency. Command
reordering allows the controller to issue bank management
commands early based on incoming patterns, so that the
desired row in memory is already open when the command
reaches the memory interface. For more information, refer
to the Data Reordering topic in the EMIF Handbook.
Starvation limit for each
command
CTRL_LPDDR3_STARVE_LIMIT
Specifies the number of commands that can be served
before a waiting command is served. The controller employs
a counter to ensure that all requests are served after a predefined interval, so that low-priority requests are not ignored when data reordering is enabled for efficiency. The
valid range for this parameter is from 1 to 63. For more
information, refer to the Starvation Control topic in the EMIF
Handbook.
Enable Command Priority
Control
CTRL_LPDDR3_USER_PRIORITY_EN
Select this parameter to enable user-requested command
priority control on the controller top level. This parameter
instructs the controller to treat a read or write request as
high-priority. The controller attempts to fill high-priority
requests sooner, to reduce latency. Connect this interface to
the conduit of your logic block that determines when the
external memory interface IP treats the read or write
request as a high-priority command.
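The starvation limit described above can be pictured as a counter guarding the reordering logic: the controller may keep choosing more efficient commands, but once the oldest waiting command has been bypassed the configured number of times it must be served. The following Python sketch is a conceptual model only, not the controller implementation; the command representation and the toy efficiency policy are assumptions for illustration.

from collections import deque

def schedule_with_starvation_limit(commands, limit):
    # 'commands' is a list of dicts in arrival order; a command with
    # row_open=True models a request to an already-open row (cheaper to serve).
    queue = deque(commands)
    bypass_count = 0
    served = []
    while queue:
        preferred = next((c for c in queue if c.get("row_open")), None)
        if preferred is not None and preferred is not queue[0] and bypass_count < limit:
            queue.remove(preferred)      # reorder for efficiency
            bypass_count += 1
        else:
            preferred = queue.popleft()  # starvation limit reached, or no better choice
            bypass_count = 0
        served.append(preferred["id"])
    return served

cmds = [{"id": 0}, {"id": 1, "row_open": True}, {"id": 2, "row_open": True}]
print(schedule_with_starvation_limit(cmds, limit=1))  # [1, 0, 2]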
Table 202.
Group: Controller / Configuration, Status, and Error Handling
Display Name
Identifier
Description
Enable Memory-Mapped Configuration and Status Register (MMR) Interface
CTRL_LPDDR3_MMR_EN
Enable this parameter to change or read memory timing
parameters, memory address size, mode register settings,
controller status, and request sideband operations.
Table 203.
Group: Controller / Data Bus Turnaround Time
Display Name
Identifier
Description
Additional read-to-read
turnaround time (different
ranks)
CTRL_LPDDR3_RD_TO_RD_DIFF_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read of one
logical rank to a read of another logical rank. This can
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional read-to-write
turnaround time (different
ranks)
CTRL_LPDDR3_RD_TO_WR_DIFF_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read of one
logical rank to a write of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional read-to-write
turnaround time (same rank)
CTRL_LPDDR3_RD_TO_WR_SAME_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read to a write
within the same logical rank. This can help resolve bus
contention problems specific to your board topology. The
value is added to the default which is calculated
automatically. Use the default setting unless you suspect a
problem exists.
Additional write-to-read
turnaround time (different
ranks)
CTRL_LPDDR3_WR_TO_RD_DIFF_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write of one
logical rank to a read of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional write-to-read
turnaround time (same rank)
CTRL_LPDDR3_WR_TO_RD_SAME_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write to a read
within the same logical rank. This can help resolve bus
contention problems specific to your board topology. The
value is added to the default which is calculated
automatically. Use the default setting unless you suspect a
problem exists.
Additional write-to-write
turnaround time (different
ranks)
CTRL_LPDDR3_WR_TO_WR_DIFF_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write of one
logical rank to a write of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
7.4.5.7 Arria 10 EMIF IP LPDDR3 Parameters: Diagnostics
Table 204.
Group: Diagnostics / Simulation Options
Display Name
Identifier
Description
Abstract phy for fast simulation
DIAG_LPDDR3_ABSTRACT_PHY
Specifies that the system use Abstract PHY for simulation.
Abstract PHY replaces the PHY with a model for fast
simulation and can reduce simulation time by 2-3 times.
Abstract PHY is available for certain protocols and device
families, and only when you select Skip Calibration.
Calibration mode
DIAG_SIM_CAL_MODE_ENUM
Specifies whether to skip memory interface calibration
during simulation, or to simulate the full calibration process.
Simulating the full calibration process can take hours (or
even days), depending on the width and depth of the
memory interface. You can achieve much faster simulation
times by skipping the calibration process, but that is only
expected to work when the memory model is ideal and the
interconnect delays are zero. If you enable this parameter,
the interface still performs some memory initialization
before starting normal operations. Abstract PHY is
supported with skip calibration.
Table 205.
Group: Diagnostics / Calibration Debug Options
Display Name
Identifier
Description
Enable Daisy-Chaining for
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_AVALON_MASTER
Specifies that the IP export an Avalon-MM master interface
(cal_debug_out) which can connect to the cal_debug
interface of other EMIF cores residing in the same I/O
column. This parameter applies only if the EMIF Debug
Toolkit or On-Chip Debug Port is enabled. Refer to the
Debugging Multiple EMIFs wiki page for more information
about debugging multiple EMIFs.
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_AVALON_SLAVE
Specifies the connectivity of an Avalon slave interface for
use by the Quartus Prime EMIF Debug Toolkit or user core
logic. If you set this parameter to "Disabled," no debug
features are enabled. If you set this parameter to "Export,"
an Avalon slave interface named "cal_debug" is exported
from the IP. To use this interface with the EMIF Debug
Toolkit, you must instantiate and connect an EMIF debug
interface IP core to it, or connect it to the cal_debug_out
interface of another EMIF core. If you select "Add EMIF
Debug Interface", an EMIF debug interface component
containing a JTAG Avalon Master is connected to the debug
port, allowing the core to be accessed by the EMIF Debug
Toolkit. Only one EMIF debug interface should be
instantiated per I/O column. You can chain additional EMIF
or PHYLite cores to the first by enabling the "Enable Daisy-Chaining for Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option for all cores in the chain, and selecting
"Export" for the "Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option on all cores after the first.
Interface ID
DIAG_INTERFACE_ID
Identifies interfaces within the I/O column, for use by the
EMIF Debug Toolkit and the On-Chip Debug Port. Interface
IDs should be unique among EMIF cores within the same
I/O column. If the Quartus Prime EMIF Debug Toolkit/On-Chip Debug Port parameter is set to Disabled, the interface
ID is unused.
Skip address/command
deskew calibration
DIAG_LPDDR3_SKIP_CA_DESKEW
Specifies to skip the address/command deskew calibration
stage. Address/command deskew performs per-bit deskew
for the address and command pins.
Skip address/command
leveling calibration
DIAG_LPDDR3_SKIP_CA_LEVEL
Specifies to skip the address/command leveling stage
during calibration. Address/command leveling attempts to
center the memory clock edge against CS# by adjusting
delay elements inside the PHY, and then applying the same
delay offset to the rest of the address and command pins.
Use Soft NIOS Processor for
On-Chip Debug
DIAG_SOFT_NIOS_MODE
Enables a soft Nios processor as a peripheral component to
access the On-Chip Debug Port. Only one interface in a
column can activate this option.
Table 206.
Group: Diagnostics / Example Design
Display Name
Identifier
Description
Enable In-System-Sourcesand-Probes
DIAG_EX_DESIGN_ISSP_EN
Enables In-System-Sources-and-Probes in the example
design for common debug signals, such as calibration status
or example traffic generator per-bit status. This parameter
must be enabled if you want to do driver margining.
Number of core clocks sharing
slaves to instantiate in the
example design
DIAG_EX_DESIGN_NUM_OF_SLAVES
Specifies the number of core clock sharing slaves to
instantiate in the example design. This parameter applies
only if you set the "Core clocks sharing" parameter in the
"General" tab to Master or Slave.
Table 207.
Group: Diagnostics / Traffic Generator
Display Name
Identifier
Description
Bypass the default traffic
pattern
DIAG_BYPASS_DEFAULT_PATTERN
Specifies that the controller/interface bypass the traffic
generator 2.0 default pattern after reset. If you do not
enable this parameter, the traffic generator does not assert
a pass or fail status until the generator is configured and
signaled to start by its Avalon configuration interface.
Bypass the traffic generator
repeated-writes/repeated-reads test pattern
DIAG_BYPASS_REPEAT_STAGE
Specifies that the controller/interface bypass the traffic
generator's repeat test stage. If you do not enable this
parameter, every write and read is repeated several times.
Bypass the traffic generator
stress pattern
DIAG_BYPASS_STRESS_STAGE
Specifies that the controller/interface bypass the traffic
generator's stress pattern stage. (Stress patterns are meant
to create worst-case signal integrity patterns on the data
pins.) If you do not enable this parameter, the traffic
generator does not assert a pass or fail status until the
generator is configured and signaled to start by its Avalon
configuration interface.
Bypass the user-configured
traffic stage
DIAG_BYPASS_USER_STAGE
Specifies that the controller/interface bypass the user-configured traffic generator's pattern after reset. If you do
not enable this parameter, the traffic generator does not
assert a pass or fail status until the generator is configured
and signaled to start by its Avalon configuration interface.
Configuration can be done by connecting to the traffic
generator via the EMIF Debug Toolkit, or by using custom
logic connected to the Avalon-MM configuration slave port
on the traffic generator. Configuration can also be simulated
using the example testbench provided in the
altera_emif_avl_tg_2_tb.sv file.
Run diagnostic on infinite test
duration
DIAG_INFI_TG2_ERR_TEST
Specifies that the traffic generator run indefinitely until the
first error is detected.
Export Traffic Generator 2.0
configuration interface
DIAG_TG_AVL_2_EXPORT_CFG_INTERFACE
Specifies that the IP export an Avalon-MM slave port for
configuring the Traffic Generator. This is required only if you
are configuring the traffic generator through user logic and
not through the EMIF Debug Toolkit.
Use configurable Avalon traffic
generator 2.0
DIAG_USE_TG_AVL_2
This option allows users to add the new configurable Avalon
traffic generator to the example design.
Table 208.
Group: Diagnostics / Performance
Display Name
Identifier
Description
Enable Efficiency Monitor
DIAG_EFFICIENCY_MONITOR
Adds an Efficiency Monitor component to the Avalon-MM interface of the memory controller, allowing you to view efficiency statistics of the interface. You can access the efficiency statistics using the EMIF Debug Toolkit.
Table 209.
Group: Diagnostics / Miscellaneous
Display Name
Identifier
Description
Use short Qsys interface names
SHORT_QSYS_INTERFACE_NAMES
Specifies the use of short interface names, for improved
usability and consistency with other Qsys components. If
this parameter is disabled, the names of Qsys interfaces
exposed by the IP will include the type and direction of the
interface. Long interface names are supported for
backward-compatibility and will be removed in a future
release.
7.4.5.8 Arria 10 EMIF IP LPDDR3 Parameters: Example Designs
Table 210.
Group: Example Designs / Available Example Designs
Display Name
Identifier
Description
Select design
EX_DESIGN_GUI_LPDDR3_SEL_DESIGN
Specifies the creation of a full Quartus Prime project,
instantiating an external memory interface and an example
traffic generator, according to your parameterization. After
the design is created, you can specify the target device and
pin location assignments, run a full compilation, verify
timing closure, and test the interface on your board using
the programming file created by the Quartus Prime
assembler. The 'Generate Example Design' button lets you
generate simulation or synthesis file sets.
Table 211.
Group: Example Designs / Example Design Files
Display Name
Identifier
Description
Simulation
EX_DESIGN_GUI_LPDDR3_GEN_SIM
Specifies that the 'Generate Example Design' button create
all necessary file sets for simulation. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, simulation file sets are not created.
Instead, the output directory will contain the ed_sim.qsys
file which holds Qsys details of the simulation example
design, and a make_sim_design.tcl file with other
corresponding tcl files. You can run make_sim_design.tcl
from a command line to generate the simulation example
design. The generated example designs for various
simulators are stored in the /sim sub-directory.
Synthesis
EX_DESIGN_GUI_LPDDR3_GEN_SYNTH
Specifies that the 'Generate Example Design' button create
all necessary file sets for synthesis. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, synthesis file sets are not created.
Instead, the output directory will contain the ed_synth.qsys
file which holds Qsys details of the synthesis example
design, and a make_qii_design.tcl script with other
corresponding tcl files. You can run make_qii_design.tcl
from a command line to generate the synthesis example
design. The generated example design is stored in the /qii
sub-directory.
Table 212.
Group: Example Designs / Generated HDL Format
Display Name
Identifier
Description
Simulation HDL format
EX_DESIGN_GUI_LPDDR3_HDL_FORMAT
This option lets you choose the format of HDL in which generated simulation files are created.
Table 213.
Group: Example Designs / Target Development Kit
Display Name
Identifier
Description
Select board
EX_DESIGN_GUI_LPDDR3_TARGET_DEV_KIT
Specifies that when you select a development kit with a
memory module, the generated example design contains all
settings and fixed pin assignments to run on the selected
board. You must select a development kit preset to
generate a working example design for the specified
development kit. Any IP settings not applied directly from a
development kit preset will not have guaranteed results
when testing the development kit. To exclude hardware
support of the example design, select 'none' from the
'Select board' pull down menu. When you apply a
development kit preset, all IP parameters are automatically
set appropriately to match the selected preset. If you want
to save your current settings, you should do so before you
apply the preset. You can save your settings under a
different name using File->Save as.
7.4.5.9 About Memory Presets
Presets help simplify the process of copying memory parameter values from memory
device data sheets to the EMIF parameter editor.
For DDRx protocols, the memory presets are named using the following convention:
PROTOCOL-SPEEDBIN LATENCY FORMAT-AND-TOPOLOGY CAPACITY (INTERNAL-ORGANIZATION)
For example, the preset named DDR4-2666U CL18 Component 1CS 2Gb (512Mb
x 4) refers to a DDR4 x4 component rated at the DDR4-2666U JEDEC speed bin, with
nominal CAS latency of 18 cycles, one chip-select, and a total memory space of 2Gb.
The JEDEC memory specification defines multiple speed bins for a given frequency
(that is, DDR4-2666U and DDR4-2666V). You may be able to determine the exact
speed bin implemented by your memory device using its nominal latency. When in
doubt, contact your memory vendor.
For RLDRAMx and QDRx protocols, the memory presets are named based on the
vendor's device part number.
When the preset list does not contain the exact configuration required, you can still
minimize data entry by selecting the preset closest to your configuration and then
modify parameters as required.
Prior to production you should always review the parameter values to ensure that they
match your memory device data sheet, regardless of whether a preset is used or not.
Incorrect memory parameters can cause functional failures.
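Because the DDRx preset names follow the fixed convention shown above, their fields can be separated mechanically. The Python sketch below merely illustrates that naming convention; it is not a tool provided with the Quartus Prime software, and it assumes the whitespace-separated format described earlier.

def parse_ddrx_preset(name):
    # Example input: "DDR4-2666U CL18 Component 1CS 2Gb (512Mb x 4)"
    head, _, organization = name.partition("(")
    speed_bin, latency, form, topology, capacity = head.split()
    return {
        "speed_bin": speed_bin,                    # protocol and JEDEC speed bin
        "latency": latency,                        # nominal CAS latency
        "format": form,                            # component, UDIMM, SODIMM, ...
        "topology": topology,                      # for example, number of chip-selects
        "capacity": capacity,                      # total memory space
        "organization": organization.rstrip(")"),  # internal organization
    }

print(parse_ddrx_preset("DDR4-2666U CL18 Component 1CS 2Gb (512Mb x 4)"))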
7.4.6 Arria 10 EMIF IP QDR-IV Parameters
The Arria 10 EMIF IP parameter editor allows you to parameterize settings for the
Arria 10 EMIF IP.
The text window at the bottom of the parameter editor displays information about the
memory interface, as well as warning and error messages. You should correct any
errors indicated in this window before clicking the Finish button.
Note:
Default settings are the minimum required to achieve timing, and may vary depending
on memory protocol.
The following tables describe the parameterization settings available in the parameter
editor for the Arria 10 EMIF IP.
7.4.6.1 Arria 10 EMIF IP QDR-IV Parameters: General
Table 214.
Group: General / FPGA
Display Name
Identifier
Description
Speed grade
PHY_FPGA_SPEEDGRADE_GUI
Indicates the device speed grade, and whether it is an engineering sample (ES) or production device. This value is based on the device that you select in the parameter editor. If you do not specify a device, the system assumes a default value. Ensure that you always specify the correct device during IP generation, otherwise your IP may not work in hardware.
Table 215.
Group: General / Interface
Display Name
Identifier
Description
Configuration
PHY_CONFIG_ENUM
Specifies the configuration of the memory interface. The
available options depend on the protocol in use. Options
include Hard PHY and Hard Controller, Hard PHY and Soft
Controller, or Hard PHY only. If you select Hard PHY only,
the AFI interface is exported to allow connection of a
custom memory controller or third-party IP.
Instantiate two controllers sharing a Ping Pong PHY
PHY_PING_PONG_EN
Specifies the instantiation of two identical memory controllers that share an address/command bus through the use of Ping Pong PHY. This parameter is available only if you specify the Hard PHY and Hard Controller option. When this parameter is enabled, the IP exposes two independent Avalon interfaces to the user logic, and a single external memory interface with double width for the data bus and the CS#, CKE, ODT, and CK/CK# signals.
Table 216.
Group: General / Clocks
Display Name
Identifier
Description
Core clocks sharing
PHY_CORE_CLKS_SHARING_ENUM
When a design contains multiple interfaces of the same
protocol, rate, frequency, and PLL reference clock source,
they can share a common set of core clock domains. By
sharing core clock domains, they reduce clock network
usage and avoid clock synchronization logic between the
interfaces. To share core clocks, denote one of the
interfaces as "Master", and the remaining interfaces as
"Slave". In the RTL, connect the clks_sharing_master_out
signal from the master interface to the
clks_sharing_slave_in signal of all the slave interfaces. Both
master and slave interfaces still expose their own output
clock ports in the RTL (for example, emif_usr_clk, afi_clk),
but the physical signals are equivalent, hence it does not
matter whether a clock port from a master or a slave is
used. As the combined width of all interfaces sharing the
same core clock increases, you may encounter timing
closure difficulty for transfers between the FPGA core and
the periphery.
Memory clock frequency
PHY_MEM_CLK_FREQ_MHZ
Specifies the operating frequency of the memory interface
in MHz. If you change the memory frequency, you should
update the memory latency parameters on the "Memory"
tab and the memory timing parameters on the "Mem
Timing" tab.
Use recommended PLL
reference clock frequency
PHY_QDR4_DEFAULT_REF_CLK_FREQ
Specifies that the PLL reference clock frequency is
automatically calculated for best performance. If you want
to specify a different PLL reference clock frequency, uncheck
the check box for this parameter.
Clock rate of user logic
PHY_RATE_ENUM
Specifies the relationship between the user logic clock
frequency and the memory clock frequency. For example, if
the memory clock sent from the FPGA to the memory
device is toggling at 800MHz, a quarter-rate interface
means that the user logic in the FPGA runs at 200MHz.
PLL reference clock frequency
PHY_REF_CLK_FREQ_MHZ
Specifies the PLL reference clock frequency. You must
configure this parameter only if you do not check the "Use
recommended PLL reference clock frequency" parameter. To
configure this parameter, select a valid PLL reference clock
frequency from the list. The values in the list can change if
you change the memory interface frequency and/or the
clock rate of the user logic. For best jitter performance, you
should use the fastest possible PLL reference clock
frequency.
PLL reference clock jitter
PHY_REF_CLK_JITTER_PS
Specifies the peak-to-peak jitter on the PLL reference clock
source. The clock source of the PLL reference clock must
meet or exceed the following jitter requirements: 10ps peak
to peak, or 1.42ps RMS at 1e-12 BER, 1.22ps at 1e-16 BER.
Specify additional core clocks
based on existing PLL
PLL_ADD_EXTRA_CLKS
Displays additional parameters allowing you to create
additional output clocks based on the existing PLL. This
parameter provides an alternative clock-generation
mechanism for when your design exhausts available PLL
resources. The additional output clocks that you create can
be fed into the core. Clock signals created with this
parameter are synchronous to each other, but asynchronous
to the memory interface core clock domains (such as
emif_usr_clk or afi_clk). You must follow proper clock-domain-crossing techniques when transferring data between
clock domains.
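The relationship between the memory clock and the user logic clock described for the "Clock rate of user logic" parameter is a simple division. The short Python sketch below restates it; the rate names are illustrative labels for the full-, half-, and quarter-rate options and are not identifiers used by the IP.

RATE_DIVIDERS = {"full": 1, "half": 2, "quarter": 4}

def user_logic_clock_mhz(mem_clk_mhz, rate):
    # User-logic clock frequency implied by the selected clock rate.
    return mem_clk_mhz / RATE_DIVIDERS[rate]

# Example from the description above: an 800 MHz memory clock with a
# quarter-rate interface gives 200 MHz user logic.
print(user_logic_clock_mhz(800.0, "quarter"))  # 200.0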
Table 217.
Group: General / Additional Core Clocks
Display Name
Identifier
Description
Number of additional core clocks
PLL_USER_NUM_OF_EXTRA_CLKS
Specifies the number of additional output clocks to create from the PLL.
7.4.6.2 Arria 10 EMIF IP QDR-IV Parameters: Memory
Table 218.
Group: Memory / Topology
Display Name
Identifier
Description
Address width
MEM_QDR4_ADDR_WIDTH
Number of address pins.
DINVA / DINVB width
MEM_QDR4_DINV_PER_PORT_WIDTH
Number of DINV pins for port A or B of the memory
interface. Automatically calculated based on the DQ width
per device and whether width expansion is enabled. Two
memory input pins without expansion and four pins with
width expansion.
DKA / DKB width
MEM_QDR4_DK_PER_PORT_WIDTH
Number of DK clock pairs for port A or B of the memory
interface. Automatically calculated based on the DQ width
per device and whether width expansion is enabled. Two
memory input pins without expansion and four pins with
width expansion.
DQ width per device
MEM_QDR4_DQ_PER_PORT_PER_DEVICE
Specifies number of DQ pins per RLDRAM3 device and
number of DQ pins per port per QDR IV device. Available
widths for DQ are x18 and x36.
DQA / DQB width
MEM_QDR4_DQ_PER_PORT_WIDTH
Number of DQ pins for port A or B of the memory interface.
Automatically calculated based on the DQ width per device
and whether width expansion is enabled. The interface
supports a width expansion configuration of up to 72 bits.
QKA / QKB width
MEM_QDR4_QK_PER_PORT_WIDTH
Number of QK clock pairs for port A or B of the memory
interface. Automatically calculated based on the DQ width
per device and whether width expansion is enabled. Two
memory input pins without expansion and four pins with
width expansion.
Enable width expansion
MEM_QDR4_WIDTH_EXPANDED
Indicates whether to combine two memory devices to
double the data bus width. With two devices, the interface
supports a width expansion configuration up to 72-bits. For
width expansion configuration, the address and control
signals are routed to 2 devices.
Table 219.
Group: Memory / Configuration Register Settings
Display Name
Identifier
Description
ODT (Address/Command)
MEM_QDR4_AC_ODT_MODE_ENUM
Determines the configuration register setting that controls
the address/command ODT setting.
Address bus inversion
MEM_QDR4_ADDR_INV_ENA
Enable address bus inversion. AINV are all active high at
memory device.
ODT (Clock)
MEM_QDR4_CK_ODT_MODE_ENUM
Determines the configuration register setting that controls
the clock ODT setting.
Data bus inversion
MEM_QDR4_DATA_INV_ENA
Enable data bus inversion for DQ pins. DINVA[1:0] and
DINVB[1:0] are all active high. When set to 1, the
corresponding bus is inverted at memory device. If the data
inversion feature is programmed to be OFF, then the DINVA/
DINVB output bits will always be driven to 0.
ODT (Data)
MEM_QDR4_DATA_ODT_MODE_ENUM
Determines the configuration register setting that controls
the data ODT setting.
Output drive (pull-down)
MEM_QDR4_PD_OUTPUT_DRIVE_MODE_ENUM
Determines the configuration register setting that controls
the pull-down output drive setting.
Output drive (pull-up)
MEM_QDR4_PU_OUTPUT_DRIVE_MODE_ENUM
Determines the configuration register setting that controls
the pull-up output drive setting.
7.4.6.3 Arria 10 EMIF IP QDR-IV Parameters: FPGA I/O
You should use Hyperlynx* or similar simulators to determine the best settings for
your board. Refer to the EMIF Simulation Guidance wiki page for additional
information.
Table 220.
Group: FPGA IO / FPGA IO Settings
Display Name
Identifier
Description
Use default I/O settings
PHY_QDR4_DEFAULT_IO
Specifies that a legal set of I/O settings are automatically
selected. The default I/O settings are not necessarily
optimized for a specific board. To achieve optimal signal
integrity, perform I/O simulations with IBIS models and
enter the I/O settings manually, based on simulation
results.
Voltage
PHY_QDR4_IO_VOLTAGE
The voltage level for the I/O pins driving the signals
between the memory device and the FPGA memory
interface.
Periodic OCT re-calibration
PHY_USER_PERIODIC_OCT_RECAL_ENUM
Specifies that the system periodically recalibrate on-chip
termination (OCT) to minimize variations in termination
value caused by changing operating conditions (such as
changes in temperature). By recalibrating OCT, I/O timing
margins are improved. When enabled, this parameter
causes the PHY to halt user traffic about every 0.5 seconds
for about 1900 memory clock cycles, to perform OCT
recalibration. Efficiency is reduced by about 1% when this
option is enabled.
Table 221.
Group: FPGA IO / Address/Command
Display Name
Identifier
Description
I/O standard
PHY_QDR4_USER_AC_IO_STD_ENUM
Specifies the I/O electrical standard for the address/
command pins of the memory interface. The selected I/O
standard configures the circuit within the I/O buffer to
match the industry standard.
Output mode
PHY_QDR4_USER_AC_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_QDR4_USER_AC_SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 222.
Group: FPGA IO / Memory Clock
Display Name
Identifier
Description
I/O standard
PHY_QDR4_USER_CK_IO_STD_ENUM
Specifies the I/O electrical standard for the memory clock
pins. The selected I/O standard configures the circuit within
the I/O buffer to match the industry standard.
Output mode
PHY_QDR4_USER_CK_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_QDR4_USER_CK_SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 223.
Group: FPGA IO / Data Bus
Display Name
Identifier
Description
Use recommended initial Vrefin
PHY_QDR4_USER_AUTO_STARTING_VREFIN_EN
Specifies that the initial Vrefin setting is calculated
automatically, to a reasonable value based on termination
settings.
Input mode
PHY_QDR4_USER_DATA_IN_MODE_ENUM
This parameter allows you to change the input termination
settings for the selected I/O standard. Perform board
simulation with IBIS models to determine the best settings
for your design.
I/O standard
PHY_QDR4_USER_DATA_IO_STD_ENUM
Specifies the I/O electrical standard for the data and data
clock/strobe pins of the memory interface. The selected I/O
standard option configures the circuit within the I/O buffer
to match the industry standard.
Output mode
PHY_QDR4_USER_DATA_OUT_MODE_ENUM
This parameter allows you to change the output current
drive strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Initial Vrefin
PHY_QDR4_USER_STARTING_VREFIN
Specifies the initial value for the reference voltage on the
data pins (Vrefin). This value is entered as a percentage of
the supply voltage level on the I/O pins. The specified value
serves as a starting point and may be overridden by
calibration to provide better timing margins. If you choose
to skip Vref calibration (Diagnostics tab), this is the value
that is used as the Vref for the interface.
Table 224.
Group: FPGA IO / PHY Inputs
Display Name
Identifier
Description
PLL reference clock I/O
standard
PHY_QDR4_USER_PLL_REF_CLK_IO_STD_ENUM
Specifies the I/O standard for the PLL reference clock of the
memory interface.
RZQ I/O standard
PHY_QDR4_USER_RZQ_IO_STD_ENUM
Specifies the I/O standard for the RZQ pin used in the
memory interface.
RZQ resistor
PHY_RZQ
Specifies the reference resistor used to calibrate the on-chip
termination value. You should connect the RZQ pin to GND
through an external resistor of the specified value.
7.4.6.4 Arria 10 EMIF IP QDR-IV Parameters: Mem Timing
These parameters should be read from the table in the datasheet associated with the
speed bin of the memory device (not necessarily the frequency at which the interface
is running).
Table 225.
Group: Mem Timing / Parameters dependent on Speed Bin
Display Name
Identifier
Description
Speed bin
MEM_QDR4_SPEEDBIN_ENUM
The speed grade of the memory device used. This
parameter refers to the maximum rate at which the
memory device is specified to run.
tASH
MEM_QDR4_TASH_PS
tASH provides the setup/hold window requirement for the
address bus in relation to the CK clock. Because the
individual signals in the address bus may not be perfectly
aligned with each other, this parameter describes the
intersection window for all the individual address signals
setup/hold margins.
tCKDK_max
MEM_QDR4_TCKDK_MAX_PS
tCKDK_max refers to the maximum skew from the memory
clock (CK) to the write strobe (DK).
tCKDK_min
MEM_QDR4_TCKDK_MIN_PS
tCKDK_min refers to the minimum skew from the memory
clock (CK) to the write strobe (DK).
tCKQK_max
MEM_QDR4_TCKQK_MAX_PS
tCKQK_max refers to the maximum skew from the memory
clock (CK) to the read strobe (QK).
tCSH
MEM_QDR4_TCSH_PS
tCSH provides the setup/hold window requirement for the
control bus (LD#, RW#) in relation to the CK clock. Because
the individual signals in the control bus may not be perfectly
aligned with each other, this parameter describes the
intersection window for all the individual control signals
setup/hold margins.
tISH
MEM_QDR4_TISH_PS
tISH provides the setup/hold window requirement for the
entire data bus (DK or DINV) in all the data groups with
respect to the DK clock. After deskew calibration, this
parameter describes the intersection window for all the
individual data bus signals setup/hold margins.
tQH
MEM_QDR4_TQH_CYC
tQH specifies the output hold time for the DQ/DINV in
relation to QK.
tQKQ_max
MEM_QDR4_TQKQ_MAX_PS
tQKQ_max describes the maximum skew between the read
strobe (QK) clock edge to the data bus (DQ/DINV) edge.
7.4.6.5 Arria 10 EMIF IP QDR-IV Parameters: Board
Table 226.
Group: Board / Intersymbol Interference/Crosstalk
Display Name
Identifier
Description
Address and command ISI/
crosstalk
BOARD_QDR4_USER_AC_ISI_NS
The address and command window reduction due to ISI and
crosstalk effects. The number to be entered is the total of
the measured loss of margin on the setup side plus the
measured loss of margin on the hold side. Refer to the EMIF
Simulation Guidance wiki page for additional information.
QK/QK# ISI/crosstalk
BOARD_QDR4_USER_RCLK_ISI_NS
QK/QK# ISI/crosstalk describes the reduction of the read
data window due to intersymbol interference and crosstalk
effects on the QK/QK# signal when driven by the memory
device during a read. The number to be entered in the
Quartus Prime software is the total of the measured loss of
margin on the setup side plus the measured loss of margin
on the hold side. Refer to the EMIF Simulation Guidance
wiki page for additional information.
Read DQ ISI/crosstalk
BOARD_QDR4_USER_RDATA_ISI_NS
The reduction of the read data window due to ISI and
crosstalk effects on the DQ signal when driven by the
memory device during a read. The number to be entered is
the total of the measured loss of margin on the setup side
plus the measured loss of margin on the hold side. Refer to
the EMIF Simulation Guidance wiki page for additional
information.
DK/DK# ISI/crosstalk
BOARD_QDR4_USER_WCLK_ISI_NS
DK/DK# ISI/crosstalk describes the reduction of the write
data window due to intersymbol interference and crosstalk
effects on the DK/DK# signal when driven by the FPGA
during a write. The number to be entered in the Quartus
Prime software is the total of the measured loss of margin
on the setup side plus the measured loss of margin on the
hold side. Refer to the EMIF Simulation Guidance wiki page
for additional information.
Write DQ ISI/crosstalk
BOARD_QDR4_USER_WDATA_ISI_NS
The reduction of the write data window due to intersymbol
interference and crosstalk effects on the DQ signal when
driven by the FPGA during a write. The number to be
entered is the total of the measured loss of margin on the
setup side plus the measured loss of margin on the hold
side. Refer to the EMIF Simulation Guidance wiki page for
additional information.
Use default ISI/crosstalk
values
BOARD_QDR4_USE_DEFAULT_ISI_VALUES
You can enable this option to use default intersymbol
interference and crosstalk values for your topology. Note
that the default values are not optimized for your board. For
optimal signal integrity, it is recommended that you do not
enable this parameter, but instead perform I/O simulation
using IBIS models and Hyperlynx*, and manually enter values based on your simulation results.
Table 227.
Group: Board / Board and Package Skews
Display Name
Identifier
Description
Average delay difference
between address/command
and CK
BOARD_QDR4_AC_TO_CK_SKEW_NS
The average delay difference between the address/
command signals and the CK signal, calculated by averaging
the longest and smallest address/command signal trace
delay minus the maximum CK trace delay. Positive values
represent address and command signals that are longer
than CK signals and negative values represent address and
command signals that are shorter than CK signals.
Average delay difference
between DK and CK
BOARD_QDR4_DK_TO_CK_SKEW_NS
This parameter describes the average delay difference
between the DK signals and the CK signal, calculated by
averaging the longest and smallest DK trace delay minus
the CK trace delay. Positive values represent DK signals that
are longer than CK signals and negative values represent
DK signals that are shorter than CK signals.
Package deskewed with board
layout (address/command
bus)
BOARD_QDR4_IS_SKEW_WITHIN_AC_DESKEWED
Enable this parameter if you are compensating for package
skew on the address, command, control, and memory clock
buses in the board layout. Include package skew in
calculating the following board skew parameters.
Package deskewed with board
layout (QK group)
BOARD_QDR4_IS_SKEW_WITHIN_QK_DESKEWED
Enable this parameter if you are compensating for package skew on the QK bus in the board layout. Include package skew in calculating the following board skew parameters.
Maximum CK delay to device
BOARD_QDR4_MAX_CK_DELAY_NS
The maximum CK delay to device refers to the delay of the
longest CK trace from the FPGA to any device.
Maximum DK delay to device
BOARD_QDR4_MAX_DK_DELAY_NS
The maximum DK delay to device refers to the delay of the
longest DK trace from the FPGA to any device.
Maximum system skew within
address/command bus
BOARD_QDR4_PKG_BRD_SKEW_WITHIN_AC_NS
Maximum system skew within address/command bus refers
to the largest skew between the address and command
signals.
Maximum system skew within
QK group
BOARD_QDR4_PKG_BRD_SKEW_WITHIN_QK_NS
Maximum system skew within QK group refers to the largest
skew between all DQ and DM pins in a QK group. This value
can affect the read capture and write margins.
Maximum delay difference
between devices
BOARD_QDR4_SKEW_BETWEEN_DIMMS_NS
This parameter describes the largest propagation delay on
the DQ signals between ranks. For example, in a two-rank
configuration where devices are placed in series, there is an
extra propagation delay for DQ signals going to and coming
back from the furthest device compared to the nearest
device. This parameter is only applicable when there is
more than one rank.
Maximum skew between DK
groups
BOARD_QDR4_SKEW_BETWEEN_DK_NS
This parameter describes the largest skew between DK
signals in different DK groups.
7.4.6.6 Arria 10 EMIF IP QDR-IV Parameters: Controller
Table 228.
Group: Controller
Display Name
Identifier
Description
Generate power-of-2 data bus
widths for Qsys
CTRL_QDR4_AVL_ENABLE_POWER_OF_TWO_BUS
If enabled, the Avalon data bus width is rounded down to
the nearest power-of-2. The width of the symbols within the
data bus is also rounded down to the nearest power-of-2.
You should only enable this option if you know you will be
connecting the memory interface to Qsys interconnect
components that require the data bus and symbol width to
be a power-of-2. If this option is enabled, you cannot utilize
the full density of the memory device. For example, with a x36 data width, enabling this parameter sets the Avalon data bus to 256 bits and ignores the upper 4 bits of the data width (see the sketch following this table).
Maximum Avalon-MM burst
length
CTRL_QDR4_AVL_MAX_BURST_COUNT
Specifies the maximum burst length on the Avalon-MM bus. This value is used to configure the FIFOs to accommodate the maximum data burst. Longer FIFOs require more core logic.
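The rounding behavior described for the "Generate power-of-2 data bus widths for Qsys" parameter is sketched below in Python; the function name is illustrative, and the 288-bit example assumes a rate at which eight 36-bit symbols are transferred per Avalon beat.

def round_down_to_power_of_two(width_bits):
    # Round a symbol or bus width down to the nearest power of 2.
    return 1 << (width_bits.bit_length() - 1)

print(round_down_to_power_of_two(36))   # 32-bit symbols: upper 4 bits unused
print(round_down_to_power_of_two(288))  # 256-bit Avalon data bus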
7.4.6.7 Arria 10 EMIF IP QDR-IV Parameters: Diagnostics
Table 229.
Group: Diagnostics / Simulation Options
Display Name
Identifier
Description
Abstract phy for fast simulation
DIAG_QDR4_ABSTRACT_PHY
Specifies that the system use Abstract PHY for simulation.
Abstract PHY replaces the PHY with a model for fast
simulation and can reduce simulation time by 2-3 times.
Abstract PHY is available for certain protocols and device
families, and only when you select Skip Calibration.
Calibration mode
DIAG_SIM_CAL_MODE_ENUM
Specifies whether to skip memory interface calibration
during simulation, or to simulate the full calibration process.
Simulating the full calibration process can take hours (or
even days), depending on the width and depth of the
memory interface. You can achieve much faster simulation
times by skipping the calibration process, but that is only
expected to work when the memory model is ideal and the
interconnect delays are zero. If you enable this parameter,
the interface still performs some memory initialization
before starting normal operations. Abstract PHY is
supported with skip calibration.
Table 230.
Group: Diagnostics / Calibration Debug Options
Display Name
Identifier
Description
Enable Daisy-Chaining for
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_AVALON_MASTER
Specifies that the IP export an Avalon-MM master interface
(cal_debug_out) which can connect to the cal_debug
interface of other EMIF cores residing in the same I/O
column. This parameter applies only if the EMIF Debug
Toolkit or On-Chip Debug Port is enabled. Refer to the
Debugging Multiple EMIFs wiki page for more information
about debugging multiple EMIFs.
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_AVALON_SLAVE
Specifies the connectivity of an Avalon slave interface for
use by the Quartus Prime EMIF Debug Toolkit or user core
logic. If you set this parameter to "Disabled," no debug
features are enabled. If you set this parameter to "Export,"
an Avalon slave interface named "cal_debug" is exported
from the IP. To use this interface with the EMIF Debug
Toolkit, you must instantiate and connect an EMIF debug
interface IP core to it, or connect it to the cal_debug_out
interface of another EMIF core. If you select "Add EMIF
Debug Interface", an EMIF debug interface component
containing a JTAG Avalon Master is connected to the debug
port, allowing the core to be accessed by the EMIF Debug
Toolkit. Only one EMIF debug interface should be
instantiated per I/O column. You can chain additional EMIF
or PHYLite cores to the first by enabling the "Enable Daisy-Chaining for Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option for all cores in the chain, and selecting
"Export" for the "Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option on all cores after the first.
Interface ID
DIAG_INTERFACE_ID
Identifies interfaces within the I/O column, for use by the
EMIF Debug Toolkit and the On-Chip Debug Port. Interface
IDs should be unique among EMIF cores within the same
I/O column. If the Quartus Prime EMIF Debug Toolkit/On-Chip Debug Port parameter is set to Disabled, the interface
ID is unused.
Skip VREF_in calibration
DIAG_QDR4_SKIP_VREF_CAL
Check this option to skip the VREF stage of calibration. Enable this option for debug purposes only; VREF calibration should remain enabled during normal operation.
Use Soft NIOS Processor for
On-Chip Debug
DIAG_SOFT_NIOS_MODE
Enables a soft Nios processor as a peripheral component to
access the On-Chip Debug Port. Only one interface in a
column can activate this option.
Table 231.
Group: Diagnostics / Example Design
Display Name
Identifier
Description
Enable In-System-Sourcesand-Probes
DIAG_EX_DESIGN_ISSP_EN
Enables In-System-Sources-and-Probes in the example
design for common debug signals, such as calibration status
or example traffic generator per-bit status. This parameter
must be enabled if you want to do driver margining.
Number of core clocks sharing
slaves to instantiate in the
example design
DIAG_EX_DESIGN_NUM_OF_SLAVES
Specifies the number of core clock sharing slaves to
instantiate in the example design. This parameter applies
only if you set the "Core clocks sharing" parameter in the
"General" tab to Master or Slave.
Table 232.
Group: Diagnostics / Traffic Generator
Display Name
Identifier
Description
Bypass the default traffic
pattern
DIAG_BYPASS_DEFAULT_PATTERN
Specifies that the controller/interface bypass the traffic
generator 2.0 default pattern after reset. If you do not
enable this parameter, the traffic generator does not assert
a pass or fail status until the generator is configured and
signaled to start by its Avalon configuration interface.
Bypass the traffic generator
repeated-writes/repeated-reads test pattern
DIAG_BYPASS_REPEAT_STAGE
Specifies that the controller/interface bypass the traffic
generator's repeat test stage. If you do not enable this
parameter, every write and read is repeated several times.
Bypass the traffic generator
stress pattern
DIAG_BYPASS_STRESS_STAGE
Specifies that the controller/interface bypass the traffic
generator's stress pattern stage. (Stress patterns are meant
to create worst-case signal integrity patterns on the data
pins.) If you do not enable this parameter, the traffic
generator does not assert a pass or fail status until the
generator is configured and signaled to start by its Avalon
configuration interface.
Bypass the user-configured
traffic stage
DIAG_BYPASS_USER_STAGE
Specifies that the controller/interface bypass the user-configured traffic generator's pattern after reset. If you do
not enable this parameter, the traffic generator does not
assert a pass or fail status until the generator is configured
and signaled to start by its Avalon configuration interface.
Configuration can be done by connecting to the traffic
generator via the EMIF Debug Toolkit, or by using custom
logic connected to the Avalon-MM configuration slave port
on the traffic generator. Configuration can also be simulated
using the example testbench provided in the
altera_emif_avl_tg_2_tb.sv file.
Run diagnostic on infinite test
duration
DIAG_INFI_TG2_ERR_TEST
Specifies that the traffic generator run indefinitely until the
first error is detected.
Export Traffic Generator 2.0
configuration interface
DIAG_TG_AVL_2_EXPORT_CFG_INTERFACE
Specifies that the IP export an Avalon-MM slave port for
configuring the Traffic Generator. This is required only if you
are configuring the traffic generator through user logic and
not through the EMIF Debug Toolkit.
Use configurable Avalon traffic
generator 2.0
DIAG_USE_TG_AVL_2
This option allows users to add the new configurable Avalon
traffic generator to the example design.
Table 233.
Group: Diagnostics / Performance
Display Name
Identifier
Description
Enable Efficiency Monitor
DIAG_EFFICIENCY_MONITOR
Adds an Efficiency Monitor component to the Avalon-MM
interface of the memory controller, allowing you to view
efficiency statistics of the interface. You can access the
efficiency statistics using the EMIF Debug Toolkit.
Table 234.
Group: Diagnostics / Miscellaneous
Display Name
Identifier
Use short Qsys interface names
SHORT_QSYS_INTERF
ACE_NAMES
Description
Specifies the use of short interface names, for improved
usability and consistency with other Qsys components. If
this parameter is disabled, the names of Qsys interfaces
exposed by the IP will include the type and direction of the
interface. Long interface names are supported for
backward-compatibility and will be removed in a future
release.
7.4.6.8 Arria 10 EMIF IP QDR-IV Parameters: Example Designs
Table 235.
Group: Example Designs / Available Example Designs
Display Name
Identifier
Description
Select design
EX_DESIGN_GUI_QDR4_SEL_DESIGN
Specifies the creation of a full Quartus Prime project,
instantiating an external memory interface and an example
traffic generator, according to your parameterization. After
the design is created, you can specify the target device and
pin location assignments, run a full compilation, verify
timing closure, and test the interface on your board using
the programming file created by the Quartus Prime
assembler. The 'Generate Example Design' button lets you
generate simulation or synthesis file sets.
Table 236.
Group: Example Designs / Example Design Files
Display Name
Identifier
Description
Simulation
EX_DESIGN_GUI_QDR
4_GEN_SIM
Specifies that the 'Generate Example Design' button create
all necessary file sets for simulation. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, simulation file sets are not created.
Instead, the output directory will contain the ed_sim.qsys
file which holds Qsys details of the simulation example
design, and a make_sim_design.tcl file with other
corresponding tcl files. You can run make_sim_design.tcl
from a command line to generate the simulation example
design. The generated example designs for various
simulators are stored in the /sim sub-directory.
Synthesis
EX_DESIGN_GUI_QDR
4_GEN_SYNTH
Specifies that the 'Generate Example Design' button create
all necessary file sets for synthesis. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, synthesis file sets are not created.
Instead, the output directory will contain the ed_synth.qsys
file which holds Qsys details of the synthesis example
design, and a make_qii_design.tcl script with other
corresponding tcl files. You can run make_qii_design.tcl
from a command line to generate the synthesis example
design. The generated example design is stored in the /qii
sub-directory.
Table 237.
Group: Example Designs / Generated HDL Format
Display Name
Identifier
Description
Simulation HDL format
EX_DESIGN_GUI_QDR4_HDL_FORMAT
This option lets you choose the format of HDL in which
generated simulation files are created.
Table 238.
Group: Example Designs / Target Development Kit
Display Name
Select board
Identifier
Description
EX_DESIGN_GUI_QDR
4_TARGET_DEV_KIT
Specifies that when you select a development kit with a
memory module, the generated example design contains all
settings and fixed pin assignments to run on the selected
board. You must select a development kit preset to
generate a working example design for the specified
development kit. Any IP settings not applied directly from a
development kit preset will not have guaranteed results
when testing the development kit. To exclude hardware
support of the example design, select 'none' from the
'Select board' pull down menu. When you apply a
development kit preset, all IP parameters are automatically
set appropriately to match the selected preset. If you want
to save your current settings, you should do so before you
apply the preset. You can save your settings under a
different name using File->Save as.
7.4.6.9 About Memory Presets
Presets help simplify the process of copying memory parameter values from memory
device data sheets to the EMIF parameter editor.
For DDRx protocols, the memory presets are named using the following convention:
PROTOCOL-SPEEDBIN LATENCY FORMAT-AND-TOPOLOGY CAPACITY (INTERNAL-ORGANIZATION)
For example, the preset named DDR4-2666U CL18 Component 1CS 2Gb (512Mb
x 4) refers to a DDR4 x4 component rated at the DDR4-2666U JEDEC speed bin, with
nominal CAS latency of 18 cycles, one chip-select, and a total memory space of 2Gb.
The JEDEC memory specification defines multiple speed bins for a given frequency
(that is, DDR4-2666U and DDR4-2666V). You may be able to determine the exact
speed bin implemented by your memory device using its nominal latency. When in
doubt, contact your memory vendor.
For RLDRAMx and QDRx protocols, the memory presets are named based on the
vendor's device part number.
When the preset list does not contain the exact configuration required, you can still
minimize data entry by selecting the preset closest to your configuration and then
modify parameters as required.
Prior to production you should always review the parameter values to ensure that they
match your memory device data sheet, regardless of whether a preset is used or not.
Incorrect memory parameters can cause functional failures.
7.4.7 Arria 10 EMIF IP QDR II/II+/II+ Xtreme Parameters
The Arria 10 EMIF IP parameter editor allows you to parameterize settings for the
Arria 10 EMIF IP.
The text window at the bottom of the parameter editor displays information about the
memory interface, as well as warning and error messages. You should correct any
errors indicated in this window before clicking the Finish button.
Note:
Default settings are the minimum required to achieve timing, and may vary depending
on memory protocol.
The following tables describe the parameterization settings available in the parameter
editor for the Arria 10 EMIF IP.
7.4.7.1 Arria 10 EMIF IP QDR II/II+/II+ Xtreme Parameters: General
Table 239.
Group: General / FPGA
Display Name
Speed grade
Identifier
Description
PHY_FPGA_SPEEDGRA
DE_GUI
Indicates the device speed grade, and whether it is an
engineering sample (ES) or production device. This value is
based on the device that you select in the parameter editor.
If you do not specify a device, the system assumes a
default value. Ensure that you always specify the correct
device during IP generation, otherwise your IP may not
work in hardware.
Table 240.
Group: General / Interface
Display Name
Identifier
Description
Configuration
PHY_CONFIG_ENUM
Specifies the configuration of the memory interface. The
available options depend on the protocol in use. Options
include Hard PHY and Hard Controller, Hard PHY and Soft
Controller, or Hard PHY only. If you select Hard PHY only,
the AFI interface is exported to allow connection of a
custom memory controller or third-party IP.
Instantiate two controllers
sharing a Ping Pong PHY
PHY_PING_PONG_EN
Specifies the instantiation of two identical memory
controllers that share an address/command bus through the
use of Ping Pong PHY. This parameter is available only if you
specify the Hard PHY and Hard Controller option. When this
parameter is enabled, the IP exposes two independent
Avalon interfaces to the user logic, and a single external
memory interface with double width for the data bus and
the CS#, CKE, ODT, and CK/CK# signals.
Table 241.
Group: General / Clocks
Display Name
Identifier
Description
Core clocks sharing
PHY_CORE_CLKS_SHA
RING_ENUM
When a design contains multiple interfaces of the same
protocol, rate, frequency, and PLL reference clock source,
they can share a common set of core clock domains. By
sharing core clock domains, they reduce clock network
usage and avoid clock synchronization logic between the
interfaces. To share core clocks, denote one of the
interfaces as "Master", and the remaining interfaces as
"Slave". In the RTL, connect the clks_sharing_master_out
signal from the master interface to the
clks_sharing_slave_in signal of all the slave interfaces, as shown in the sketch following this table. Both
master and slave interfaces still expose their own output
clock ports in the RTL (for example, emif_usr_clk, afi_clk),
but the physical signals are equivalent, hence it does not
matter whether a clock port from a master or a slave is
used. As the combined width of all interfaces sharing the
same core clock increases, you may encounter timing
closure difficulty for transfers between the FPGA core and
the periphery.
Memory clock frequency
PHY_MEM_CLK_FREQ_
MHZ
Specifies the operating frequency of the memory interface
in MHz. If you change the memory frequency, you should
update the memory latency parameters on the "Memory"
tab and the memory timing parameters on the "Mem
Timing" tab.
Use recommended PLL
reference clock frequency
PHY_QDR2_DEFAULT_
REF_CLK_FREQ
Specifies that the PLL reference clock frequency is
automatically calculated for best performance. If you want
to specify a different PLL reference clock frequency, uncheck
the check box for this parameter.
Clock rate of user logic
PHY_RATE_ENUM
Specifies the relationship between the user logic clock
frequency and the memory clock frequency. For example, if
the memory clock sent from the FPGA to the memory
device is toggling at 800MHz, a quarter-rate interface
means that the user logic in the FPGA runs at 200MHz.
PLL reference clock frequency
PHY_REF_CLK_FREQ_
MHZ
Specifies the PLL reference clock frequency. You must
configure this parameter only if you do not check the "Use
recommended PLL reference clock frequency" parameter. To
configure this parameter, select a valid PLL reference clock
frequency from the list. The values in the list can change if
you change the memory interface frequency and/or the
clock rate of the user logic. For best jitter performance, you
should use the fastest possible PLL reference clock
frequency.
PLL reference clock jitter
PHY_REF_CLK_JITTER
_PS
Specifies the peak-to-peak jitter on the PLL reference clock
source. The clock source of the PLL reference clock must
meet or exceed the following jitter requirements: 10ps peak
to peak, or 1.42ps RMS at 1e-12 BER, 1.22ps at 1e-16 BER.
Specify additional core clocks
based on existing PLL
PLL_ADD_EXTRA_CLK
S
Displays additional parameters allowing you to create
additional output clocks based on the existing PLL. This
parameter provides an alternative clock-generation
mechanism for when your design exhausts available PLL
resources. The additional output clocks that you create can
be fed into the core. Clock signals created with this
parameter are synchronous to each other, but asynchronous
to the memory interface core clock domains (such as
emif_usr_clk or afi_clk). You must follow proper clock-domain-crossing techniques when transferring data between
clock domains.
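As described for the "Core clocks sharing" parameter above, the master interface's clks_sharing_master_out port is wired to the clks_sharing_slave_in port of every slave interface. The following is a minimal RTL sketch of that connection only; the module names, instance names, and sharing-bus width are assumptions, and the exact port list should be taken from the HDL generated for your EMIF instances.

```verilog
// Minimal sketch of core clocks sharing between two generated EMIF
// instances (module/instance names and widths are assumptions).
module core_clks_sharing_sketch;
  wire [1:0] core_clks_share;            // sharing bus width is IP-dependent
  wire       usr_clk_master, usr_clk_slave;

  emif_qdr2_master u_emif_master (       // generated with Core clocks sharing = "Master"
    .clks_sharing_master_out (core_clks_share),
    .emif_usr_clk            (usr_clk_master)
    // ... remaining EMIF ports ...
  );

  emif_qdr2_slave u_emif_slave (         // generated with Core clocks sharing = "Slave"
    .clks_sharing_slave_in   (core_clks_share),
    .emif_usr_clk            (usr_clk_slave)  // physically equivalent to usr_clk_master
    // ... remaining EMIF ports ...
  );
endmodule
```

Either usr_clk_master or usr_clk_slave can then clock the shared user logic, because, as the table above notes, the two ports carry the same physical clock.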
Table 242.
Group: General / Additional Core Clocks
Display Name
Number of additional core
clocks
Identifier
PLL_USER_NUM_OF_E
XTRA_CLKS
Description
Specifies the number of additional output clocks to create
from the PLL.
7.4.7.2 Arria 10 EMIF IP QDR II/II+/II+ Xtreme Parameters: Memory
Table 243.
Group: Memory / Topology
Display Name
Identifier
Description
Address width
MEM_QDR2_ADDR_WI
DTH
Number of address pins.
Burst length
MEM_QDR2_BL
Burst length of the memory device.
Enable BWS# pins
MEM_QDR2_BWS_EN
Indicates whether the interface uses the BWS# (Byte Write
Select) pins. If enabled, one BWS# pin is added for every 9
D pins.
BWS# width
MEM_QDR2_BWS_N_
WIDTH
Number of BWS# (Byte Write Select) pins of the memory
interface. Automatically calculated based on the data width
per device and whether width expansion is enabled. BWS#
pins are used to select which byte is written into the device
during the current portion of the write operations. Bytes not
written remain unaltered.
CQ width
MEM_QDR2_CQ_WIDT
H
Width of the CQ (read strobe) clock on the memory device.
Data width per device
MEM_QDR2_DATA_PE
R_DEVICE
Number of D and Q pins per QDR II device.
Data width
MEM_QDR2_DATA_WI
DTH
Number of D and Q pins of the memory interface.
Automatically calculated based on the D and Q width per
device and whether width expansion is enabled.
K width
MEM_QDR2_K_WIDTH
Width of the K (address, command and write strobe) clock
on the memory device.
Enable width expansion
MEM_QDR2_WIDTH_E
XPANDED
Indicates whether to combine two memory devices to
double the data bus width. With two devices, the interface
supports a width expansion configuration up to 72-bits. For
width expansion configuration, the address and control
signals are routed to 2 devices.
7.4.7.3 Arria 10 EMIF IP QDR II/II+/II+ Xtreme Parameters: FPGA I/O
You should use Hyperlynx* or similar simulators to determine the best settings for
your board. Refer to the EMIF Simulation Guidance wiki page for additional
information.
Table 244.
Group: FPGA IO / FPGA IO Settings
Display Name
Identifier
Description
Use default I/O settings
PHY_QDR2_DEFAULT_
IO
Specifies that a legal set of I/O settings is automatically
selected. The default I/O settings are not necessarily
optimized for a specific board. To achieve optimal signal
integrity, perform I/O simulations with IBIS models and
enter the I/O settings manually, based on simulation
results.
Voltage
PHY_QDR2_IO_VOLTA
GE
The voltage level for the I/O pins driving the signals
between the memory device and the FPGA memory
interface.
Periodic OCT re-calibration
PHY_USER_PERIODIC
_OCT_RECAL_ENUM
Specifies that the system periodically recalibrate on-chip
termination (OCT) to minimize variations in termination
value caused by changing operating conditions (such as
changes in temperature). By recalibrating OCT, I/O timing
margins are improved. When enabled, this parameter
causes the PHY to halt user traffic about every 0.5 seconds
for about 1900 memory clock cycles, to perform OCT
recalibration. Efficiency is reduced by about 1% when this
option is enabled.
Table 245.
Group: FPGA IO / Address/Command
Display Name
Identifier
Description
I/O standard
PHY_QDR2_USER_AC
_IO_STD_ENUM
Specifies the I/O electrical standard for the address/
command pins of the memory interface. The selected I/O
standard configures the circuit within the I/O buffer to
match the industry standard.
Output mode
PHY_QDR2_USER_AC
_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_QDR2_USER_AC
_SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 246.
Group: FPGA IO / Memory Clock
Display Name
Identifier
Description
I/O standard
PHY_QDR2_USER_CK
_IO_STD_ENUM
Specifies the I/O electrical standard for the memory clock
pins. The selected I/O standard configures the circuit within
the I/O buffer to match the industry standard.
Output mode
PHY_QDR2_USER_CK
_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_QDR2_USER_CK
_SLEW_RATE_ENUM
Specifies the slew rate of the memory clock output pins.
The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the memory clock
signals.
Table 247.
Group: FPGA IO / Data Bus
Display Name
Identifier
Use recommended initial Vrefin
PHY_QDR2_USER_AU
TO_STARTING_VREFI
N_EN
Specifies that the initial Vrefin setting is calculated
automatically, to a reasonable value based on termination
settings.
Input mode
PHY_QDR2_USER_DA
TA_IN_MODE_ENUM
This parameter allows you to change the input termination
settings for the selected I/O standard. Perform board
simulation with IBIS models to determine the best settings
for your design.
I/O standard
PHY_QDR2_USER_DA
TA_IO_STD_ENUM
Specifies the I/O electrical standard for the data and data
clock/strobe pins of the memory interface. The selected I/O
standard option configures the circuit within the I/O buffer
to match the industry standard.
Output mode
PHY_QDR2_USER_DA
TA_OUT_MODE_ENUM
This parameter allows you to change the output current
drive strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Initial Vrefin
PHY_QDR2_USER_STA
RTING_VREFIN
Specifies the initial value for the reference voltage on the
data pins (Vrefin). This value is entered as a percentage of
the supply voltage level on the I/O pins. The specified value
serves as a starting point and may be overridden by
calibration to provide better timing margins. If you choose
to skip Vref calibration (Diagnostics tab), this is the value
that is used as the Vref for the interface.
Table 248.
Group: FPGA IO / PHY Inputs
Display Name
Identifier
Description
PLL reference clock I/O
standard
PHY_QDR2_USER_PLL
_REF_CLK_IO_STD_E
NUM
Specifies the I/O standard for the PLL reference clock of the
memory interface.
RZQ I/O standard
PHY_QDR2_USER_RZ
Q_IO_STD_ENUM
Specifies the I/O standard for the RZQ pin used in the
memory interface.
RZQ resistor
PHY_RZQ
Specifies the reference resistor used to calibrate the on-chip
termination value. You should connect the RZQ pin to GND
through an external resistor of the specified value.
7.4.7.4 Arria 10 EMIF IP QDR II/II+/II+ Xtreme Parameters: Mem Timing
These parameters should be read from the table in the datasheet associated with the
speed bin of the memory device (not necessarily the frequency at which the interface
is running).
Table 249.
Group: Mem Timing / Parameters dependent on Speed Bin
Display Name
Identifier
Description
Internal Jitter
MEM_QDR2_INTERNA
L_JITTER_NS
QDRII internal jitter.
Speed bin
MEM_QDR2_SPEEDBI
N_ENUM
The speed grade of the memory device used. This
parameter refers to the maximum rate at which the
memory device is specified to run.
tCCQO
MEM_QDR2_TCCQO_N
S
tCCQO describes the skew from the rising edge of the C
clock to the rising edge of the echo clock (CQ) in QDRII
memory devices.
tCQDOH
MEM_QDR2_TCQDOH
_NS
tCQDOH refers to the minimum time expected between the
echo clock (CQ or CQ#) edge and the last of the valid Read
data (Q).
tCQD
MEM_QDR2_TCQD_NS
tCQD refers to the maximum time expected between an
echo clock edge and valid data on the Read Data bus (Q).
tCQH
MEM_QDR2_TCQH_NS
tCQH describes the time period during which the echo clock
(CQ, CQ#) is considered logically high.
tHA
MEM_QDR2_THA_NS
tHA refers to the hold time after the rising edge of the clock
(K) to the address and command control bus (A). The
address and command control bus must remain stable for at
least tHA after the rising edge of K.
tHD
MEM_QDR2_THD_NS
tHD refers to the hold time after the rising edge of the clock
(K) to the data bus (D). The data bus must remain stable
for at least tHD after the rising edge of K.
tRL
MEM_QDR2_TRL_CYC
tRL refers to the QDR memory specific read latency. This
parameter describes the length of time after a Read
command has been registered on the rising edge of the
Write Clock (K) at the QDR memory before the first piece of
read data (Q) can be expected at the output of the memory.
It is measured in Write Clock (K) cycles. The Read Latency
is specific to a QDR memory device and cannot be modified
to a different value. The Read Latency (tRL) can have the
following values: 1.5, 2, or 2.5 clock cycles.
tSA
MEM_QDR2_TSA_NS
tSA refers to the setup time for the address and command
bus (A) before the rising edge of the clock (K). The address
and command bus must be stable for at least tSA before the
rising edge of K.
tSD
MEM_QDR2_TSD_NS
tSD refers to the setup time for the data bus (D) before the
rising edge of the clock (K). The data bus must be stable for
at least tSD before the rising edge of K.
tWL
MEM_QDR2_TWL_CYC
tWL refers to the write latency requirement at the QDR
memory. This parameter describes the length of time after a
Write command has been registered at the memory on the
rising edge of the Write clock (K) before the memory
expects the Write Data (D). It is measured in (K) clock
cycles and is usually 1.
7.4.7.5 Arria 10 EMIF IP QDR II/II+/II+ Xtreme Parameters: Board
Table 250.
Group: Board / Intersymbol Interference/Crosstalk
Display Name
Identifier
Description
Address and command ISI/
crosstalk
BOARD_QDR2_USER_
AC_ISI_NS
The address and command window reduction due to ISI and
crosstalk effects. The number to be entered is the total of
the measured loss of margin on the setup side plus the
measured loss of margin on the hold side. Refer to the EMIF
Simulation Guidance wiki page for additional information.
CQ/CQ# ISI/crosstalk
BOARD_QDR2_USER_
RCLK_ISI_NS
CQ/CQ# ISI/crosstalk describes the reduction of the read
data window due to intersymbol interference and crosstalk
effects on the CQ/CQ# signal when driven by the memory
device during a read. The number to be entered in the
Quartus Prime software is the total of the measured loss of
margin on the setup side plus the measured loss of margin
on the hold side. Refer to the EMIF Simulation Guidance
wiki page for additional information.
Read Q ISI/crosstalk
BOARD_QDR2_USER_
RDATA_ISI_NS
Read Q ISI/crosstalk describes the reduction of the read
data window due to intersymbol interference and crosstalk
effects on the Q signal when driven by the memory
device during a read. The number to be entered in the
Quartus Prime software is the total of the measured loss of
margin on the setup side plus the measured loss of margin
on the hold side. Refer to the EMIF Simulation Guidance
wiki page for additional information.
K/K# ISI/crosstalk
BOARD_QDR2_USER_
WCLK_ISI_NS
K/K# ISI/crosstalk describes the reduction of the write data
window due to intersymbol interference and crosstalk
effects on the K/K# signal when driven by the FPGA during
a write. The number to be entered in the Quartus Prime
software is the total of the measured loss of margin on the
setup side plus the measured loss of margin on the hold
side. Refer to the EMIF Simulation Guidance wiki page for
additional information.
Write D ISI/crosstalk
BOARD_QDR2_USER_
WDATA_ISI_NS
Write D ISI/crosstalk describes the reduction of the write
data window due to intersymbol interference and crosstalk
effects on the signal when driven by the FPGA
during a write. The number to be entered in the Quartus
Prime software is the total of the measured loss of margin
on the setup side plus the measured loss of margin on the
hold side. Refer to the EMIF Simulation Guidance wiki page
for additional information.
Use default ISI/crosstalk
values
BOARD_QDR2_USE_D
EFAULT_ISI_VALUES
You can enable this option to use default intersymbol
interference and crosstalk values for your topology. Note
that the default values are not optimized for your board. For
optimal signal integrity, it is recommended that you do not
enable this parameter, but instead perform I/O simulation
using IBIS models and Hyperlynx*, and manually enter
values based on your simulation results.
Table 251.
Group: Board / Board and Package Skews
Display Name
Average delay difference
between address/command
and K
Identifier
Description
BOARD_QDR2_AC_TO
_K_SKEW_NS
This parameter refers to the average delay difference
between the Address/Command signals and the K signal,
calculated by averaging the longest and smallest Address/
Command trace delay minus the maximum K trace delay.
Positive values represent address and command signals that
are longer than K signals and negative values represent
address and command signals that are shorter than K
signals.
Maximum board skew within D
group
BOARD_QDR2_BRD_S
KEW_WITHIN_D_NS
This parameter refers to the largest skew between all D and
BWS# signals in a D group. D pins are used for driving data
signals to the memory device during a write operation.
BWS# pins are used as Byte Write Select signals to control
which byte(s) are written to the memory during a write
operation. Users should enter their board skew only.
Package skew will be calculated automatically, based on the
memory interface configuration, and added to this value.
This value affects the read capture and write margins.
Maximum board skew within Q
group
BOARD_QDR2_BRD_S
KEW_WITHIN_Q_NS
This parameter describes the largest skew between all Q
signals in a Q group. Q pins drive the data signals from the
memory to the FPGA when the read operation is active.
Users should enter their board skew only. Package skew will
be calculated automatically, based on the memory interface
configuration, and added to this value. This value affects the
read capture and write margins.
Package deskewed with board
layout (address/command
bus)
BOARD_QDR2_IS_SK
EW_WITHIN_AC_DES
KEWED
Enable this parameter if you are compensating for package
skew on the address, command, control, and memory clock
buses in the board layout. Include package skew in
calculating the following board skew parameters.
Package deskewed with board
layout (D group)
BOARD_QDR2_IS_SK
EW_WITHIN_D_DESK
EWED
If you are compensating for package skew on the D and
BWS# signals in the board layout (hence checking the box
here), please include package skew in calculating the
following board skew parameters.
Package deskewed with board
layout (Q group)
BOARD_QDR2_IS_SK
EW_WITHIN_Q_DESK
EWED
If you are compensating for package skew on the Q bus in
the board layout (hence checking the box here), please
include package skew in calculating the following board
skew parameters.
Maximum K delay to device
BOARD_QDR2_MAX_K
_DELAY_NS
The maximum K delay to device refers to the delay of the
longest K trace from the FPGA to any device.
Maximum system skew within
address/command bus
BOARD_QDR2_PKG_B
RD_SKEW_WITHIN_A
C_NS
Maximum system skew within address/command bus refers
to the largest skew between the address and command
signals.
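As a worked illustration of the "Average delay difference between address/command and K" entry in the table above, the sketch below applies the stated formula to assumed example trace delays. The numbers are illustrative only and must come from your own board layout.

```verilog
// Illustrative only: assumed board trace delays for the
// BOARD_QDR2_AC_TO_K_SKEW_NS calculation described above.
package board_skew_example_pkg;
  localparam real LONGEST_AC_DELAY_NS  = 0.60;  // longest address/command trace
  localparam real SHORTEST_AC_DELAY_NS = 0.50;  // shortest address/command trace
  localparam real MAX_K_DELAY_NS       = 0.45;  // longest K clock trace

  // Average of the longest and shortest A/C delays, minus the maximum K delay.
  localparam real AC_TO_K_SKEW_NS =
      ((LONGEST_AC_DELAY_NS + SHORTEST_AC_DELAY_NS) / 2.0) - MAX_K_DELAY_NS;  // = 0.10 ns
endpackage
```

A positive result, as in this example, means the address/command traces are on average longer than the K trace; a negative result means they are shorter.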
7.4.7.6 Arria 10 EMIF IP QDR II/II+/II+ Xtreme Parameters: Controller
Table 252.
Group: Controller
Display Name
Identifier
Description
Generate power-of-2 data bus
widths for Qsys
CTRL_QDR2_AVL_ENA
BLE_POWER_OF_TWO
_BUS
If enabled, the Avalon data bus width is rounded down to
the nearest power-of-2. The width of the symbols within the
data bus is also rounded down to the nearest power-of-2.
You should only enable this option if you know you will be
connecting the memory interface to Qsys interconnect
components that require the data bus and symbol width to
be a power-of-2. If this option is enabled, you cannot utilize
the full density of the memory device. For example, with a
x36 data width, enabling this parameter results in a 256-bit
Avalon data bus, and the upper 4 bits of the data width are
ignored (see the sketch following this table).
Maximum Avalon-MM burst
length
CTRL_QDR2_AVL_MAX
_BURST_COUNT
Specifies the maximum burst length on the Avalon-MM bus.
This value is used to configure the FIFOs so that they can
manage the maximum data burst. Longer FIFOs require
more core logic.
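The power-of-2 rounding referenced in the table above can be sketched as a constant calculation. The 288-bit full Avalon data width used here assumes eight memory words per Avalon beat for a x36 interface and is an assumption to be confirmed against your generated IP.

```verilog
// Minimal sketch of rounding the Avalon data bus width down to the
// nearest power of 2, as done when "Generate power-of-2 data bus widths
// for Qsys" is enabled. FULL_AVL_DATA_WIDTH = 288 is an assumed example.
package avl_pow2_example_pkg;
  localparam int unsigned FULL_AVL_DATA_WIDTH = 288;
  // 2**($clog2(n+1)-1) rounds n down to a power of 2: 288 -> 256.
  localparam int unsigned POW2_AVL_DATA_WIDTH =
      2 ** ($clog2(FULL_AVL_DATA_WIDTH + 1) - 1);
endpackage
```

With the bus rounded down to 256 bits, the upper portion of the x36 data width cannot be accessed, which is why the full memory density is not usable when this option is enabled.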
7.4.7.7 Arria 10 EMIF IP QDR II/II+/II+ Xtreme Parameters: Diagnostics
Table 253.
Group: Diagnostics / Simulation Options
Display Name
Identifier
Description
Abstract phy for fast simulation
DIAG_QDR2_ABSTRA
CT_PHY
Specifies that the system use Abstract PHY for simulation.
Abstract PHY replaces the PHY with a model for fast
simulation and can reduce simulation time by 2-3 times.
Abstract PHY is available for certain protocols and device
families, and only when you select Skip Calibration.
Calibration mode
DIAG_SIM_CAL_MODE
_ENUM
Specifies whether to skip memory interface calibration
during simulation, or to simulate the full calibration process.
Simulating the full calibration process can take hours (or
even days), depending on the width and depth of the
memory interface. You can achieve much faster simulation
times by skipping the calibration process, but that is only
expected to work when the memory model is ideal and the
interconnect delays are zero. If you enable this parameter,
the interface still performs some memory initialization
before starting normal operations. Abstract PHY is
supported with skip calibration.
Table 254.
Group: Diagnostics / Calibration Debug Options
Display Name
Identifier
Description
Enable Daisy-Chaining for
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_A
VALON_MASTER
Specifies that the IP export an Avalon-MM master interface
(cal_debug_out) which can connect to the cal_debug
interface of other EMIF cores residing in the same I/O
column. This parameter applies only if the EMIF Debug
Toolkit or On-Chip Debug Port is enabled. Refer to the
Debugging Multiple EMIFs wiki page for more information
about debugging multiple EMIFs.
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_A
VALON_SLAVE
Specifies the connectivity of an Avalon slave interface for
use by the Quartus Prime EMIF Debug Toolkit or user core
logic. If you set this parameter to "Disabled," no debug
features are enabled. If you set this parameter to "Export,"
an Avalon slave interface named "cal_debug" is exported
from the IP. To use this interface with the EMIF Debug
Toolkit, you must instantiate and connect an EMIF debug
interface IP core to it, or connect it to the cal_debug_out
interface of another EMIF core. If you select "Add EMIF
Debug Interface", an EMIF debug interface component
containing a JTAG Avalon Master is connected to the debug
port, allowing the core to be accessed by the EMIF Debug
Toolkit. Only one EMIF debug interface should be
instantiated per I/O column. You can chain additional EMIF
or PHYLite cores to the first by enabling the "Enable Daisy-Chaining for Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option for all cores in the chain, and selecting
"Export" for the "Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option on all cores after the first.
Interface ID
DIAG_INTERFACE_ID
Identifies interfaces within the I/O column, for use by the
EMIF Debug Toolkit and the On-Chip Debug Port. Interface
IDs should be unique among EMIF cores within the same
I/O column. If the Quartus Prime EMIF Debug Toolkit/On-Chip Debug Port parameter is set to Disabled, the interface
ID is unused.
Use Soft NIOS Processor for
On-Chip Debug
DIAG_SOFT_NIOS_M
ODE
Enables a soft Nios processor as a peripheral component to
access the On-Chip Debug Port. Only one interface in a
column can activate this option.
Table 255.
Group: Diagnostics / Example Design
Display Name
Identifier
Description
Enable In-System-Sources-and-Probes
DIAG_EX_DESIGN_IS
SP_EN
Enables In-System-Sources-and-Probes in the example
design for common debug signals, such as calibration status
or example traffic generator per-bit status. This parameter
must be enabled if you want to do driver margining.
Number of core clocks sharing
slaves to instantiate in the
example design
DIAG_EX_DESIGN_NU
M_OF_SLAVES
Specifies the number of core clock sharing slaves to
instantiate in the example design. This parameter applies
only if you set the "Core clocks sharing" parameter in the
"General" tab to Master or Slave.
Table 256.
Group: Diagnostics / Traffic Generator
Display Name
Identifier
Description
Bypass the default traffic
pattern
DIAG_BYPASS_DEFAU
LT_PATTERN
Specifies that the controller/interface bypass the traffic
generator 2.0 default pattern after reset. If you do not
enable this parameter, the traffic generator does not assert
a pass or fail status until the generator is configured and
signaled to start by its Avalon configuration interface.
Bypass the traffic generator
repeated-writes/repeated-reads test pattern
DIAG_BYPASS_REPEA
T_STAGE
Specifies that the controller/interface bypass the traffic
generator's repeat test stage. If you do not enable this
parameter, every write and read is repeated several times.
Bypass the traffic generator
stress pattern
DIAG_BYPASS_STRES
S_STAGE
Specifies that the controller/interface bypass the traffic
generator's stress pattern stage. (Stress patterns are meant
to create worst-case signal integrity patterns on the data
pins.) If you do not enable this parameter, the traffic
generator does not assert a pass or fail status until the
generator is configured and signaled to start by its Avalon
configuration interface.
Bypass the user-configured
traffic stage
DIAG_BYPASS_USER_
STAGE
Specifies that the controller/interface bypass the user-configured traffic generator's pattern after reset. If you do
not enable this parameter, the traffic generator does not
assert a pass or fail status until the generator is configured
and signaled to start by its Avalon configuration interface.
Configuration can be done by connecting to the traffic
generator via the EMIF Debug Toolkit, or by using custom
logic connected to the Avalon-MM configuration slave port
on the traffic generator. Configuration can also be simulated
using the example testbench provided in the
altera_emif_avl_tg_2_tb.sv file.
Run diagnostic on infinite test
duration
DIAG_INFI_TG2_ERR_
TEST
Specifies that the traffic generator run indefinitely until the
first error is detected.
Export Traffic Generator 2.0
configuration interface
DIAG_TG_AVL_2_EXP
ORT_CFG_INTERFACE
Specifies that the IP export an Avalon-MM slave port for
configuring the Traffic Generator. This is required only if you
are configuring the traffic generator through user logic and
not through the EMIF Debug Toolkit.
Use configurable Avalon traffic
generator 2.0
DIAG_USE_TG_AVL_2
This option allows users to add the new configurable Avalon
traffic generator to the example design.
Table 257.
Group: Diagnostics / Performance
Display Name
Enable Efficiency Monitor
Identifier
Description
DIAG_EFFICIENCY_M
ONITOR
Adds an Efficiency Monitor component to the Avalon-MM
interface of the memory controller, allowing you to view
efficiency statistics of the interface. You can access the
efficiency statistics using the EMIF Debug Toolkit.
Table 258.
Group: Diagnostics / Miscellaneous
Display Name
Identifier
Use short Qsys interface names
SHORT_QSYS_INTERF
ACE_NAMES
Description
Specifies the use of short interface names, for improved
usability and consistency with other Qsys components. If
this parameter is disabled, the names of Qsys interfaces
exposed by the IP will include the type and direction of the
interface. Long interface names are supported for
backward-compatibility and will be removed in a future
release.
7.4.7.8 Arria 10 EMIF IP QDR II/II+/II+ Xtreme Parameters: Example Designs
Table 259.
Group: Example Designs / Available Example Designs
Display Name
Identifier
Description
Select design
EX_DESIGN_GUI_QDR2_SEL_DESIGN
Specifies the creation of a full Quartus Prime project,
instantiating an external memory interface and an example
traffic generator, according to your parameterization. After
the design is created, you can specify the target device and
pin location assignments, run a full compilation, verify
timing closure, and test the interface on your board using
the programming file created by the Quartus Prime
assembler. The 'Generate Example Design' button lets you
generate simulation or synthesis file sets.
Table 260.
Group: Example Designs / Example Design Files
Display Name
Identifier
Description
Simulation
EX_DESIGN_GUI_QDR
2_GEN_SIM
Specifies that the 'Generate Example Design' button create
all necessary file sets for simulation. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, simulation file sets are not created.
Instead, the output directory will contain the ed_sim.qsys
file which holds Qsys details of the simulation example
design, and a make_sim_design.tcl file with other
corresponding tcl files. You can run make_sim_design.tcl
from a command line to generate the simulation example
design. The generated example designs for various
simulators are stored in the /sim sub-directory.
Synthesis
EX_DESIGN_GUI_QDR
2_GEN_SYNTH
Specifies that the 'Generate Example Design' button create
all necessary file sets for synthesis. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, synthesis file sets are not created.
Instead, the output directory will contain the ed_synth.qsys
file which holds Qsys details of the synthesis example
design, and a make_qii_design.tcl script with other
corresponding tcl files. You can run make_qii_design.tcl
from a command line to generate the synthesis example
design. The generated example design is stored in the /qii
sub-directory.
Table 261.
Group: Example Designs / Generated HDL Format
Display Name
Simulation HDL format
Identifier
EX_DESIGN_GUI_QDR
2_HDL_FORMAT
Description
This option lets you choose the format of HDL in which
generated simulation files are created.
Table 262.
Group: Example Designs / Target Development Kit
Display Name
Select board
Identifier
Description
EX_DESIGN_GUI_QDR
2_TARGET_DEV_KIT
Specifies that when you select a development kit with a
memory module, the generated example design contains all
settings and fixed pin assignments to run on the selected
board. You must select a development kit preset to
generate a working example design for the specified
development kit. Any IP settings not applied directly from a
development kit preset will not have guaranteed results
when testing the development kit. To exclude hardware
support of the example design, select 'none' from the
'Select board' pull down menu. When you apply a
development kit preset, all IP parameters are automatically
set appropriately to match the selected preset. If you want
to save your current settings, you should do so before you
apply the preset. You can save your settings under a
different name using File->Save as.
7.4.7.9 About Memory Presets
Presets help simplify the process of copying memory parameter values from memory
device data sheets to the EMIF parameter editor.
For DDRx protocols, the memory presets are named using the following convention:
PROTOCOL-SPEEDBIN LATENCY FORMAT-AND-TOPOLOGY CAPACITY (INTERNAL-ORGANIZATION)
For example, the preset named DDR4-2666U CL18 Component 1CS 2Gb (512Mb
x 4) refers to a DDR4 x4 component rated at the DDR4-2666U JEDEC speed bin, with
nominal CAS latency of 18 cycles, one chip-select, and a total memory space of 2Gb.
The JEDEC memory specification defines multiple speed bins for a given frequency
(that is, DDR4-2666U and DDR4-2666V). You may be able to determine the exact
speed bin implemented by your memory device using its nominal latency. When in
doubt, contact your memory vendor.
For RLDRAMx and QDRx protocols, the memory presets are named based on the
vendor's device part number.
When the preset list does not contain the exact configuration required, you can still
minimize data entry by selecting the preset closest to your configuration and then
modify parameters as required.
Prior to production you should always review the parameter values to ensure that they
match your memory device data sheet, regardless of whether a preset is used or not.
Incorrect memory parameters can cause functional failures.
7.4.8 Arria 10 EMIF IP RLDRAM 3 Parameters
The Arria 10 EMIF IP parameter editor allows you to parameterize settings for the
Arria 10 EMIF IP.
The text window at the bottom of the parameter editor displays information about the
memory interface, as well as warning and error messages. You should correct any
errors indicated in this window before clicking the Finish button.
Note:
Default settings are the minimum required to achieve timing, and may vary depending
on memory protocol.
The following tables describe the parameterization settings available in the parameter
editor for the Arria 10 EMIF IP.
7.4.8.1 Arria 10 EMIF IP RLDRAM 3 Parameters: General
Table 263.
Group: General / FPGA
Display Name
Identifier
Description
Speed grade
PHY_FPGA_SPEEDGRADE_GUI
Indicates the device speed grade, and whether it is an
engineering sample (ES) or production device. This value is
based on the device that you select in the parameter editor.
If you do not specify a device, the system assumes a
default value. Ensure that you always specify the correct
device during IP generation, otherwise your IP may not
work in hardware.
Table 264.
Group: General / Interface
Display Name
Identifier
Description
Configuration
PHY_CONFIG_ENUM
Specifies the configuration of the memory interface. The
available options depend on the protocol in use. Options
include Hard PHY and Hard Controller, Hard PHY and Soft
Controller, or Hard PHY only. If you select Hard PHY only,
the AFI interface is exported to allow connection of a
custom memory controller or third-party IP.
Instantiate two controllers
sharing a Ping Pong PHY
PHY_PING_PONG_EN
Specifies the instantiation of two identical memory
controllers that share an address/command bus through the
use of Ping Pong PHY. This parameter is available only if you
specify the Hard PHY and Hard Controller option. When this
parameter is enabled, the IP exposes two independent
Avalon interfaces to the user logic, and a single external
memory interface with double width for the data bus and
the CS#, CKE, ODT, and CK/CK# signals.
Table 265.
Group: General / Clocks
Display Name
Identifier
Description
Core clocks sharing
PHY_CORE_CLKS_SHA
RING_ENUM
When a design contains multiple interfaces of the same
protocol, rate, frequency, and PLL reference clock source,
they can share a common set of core clock domains. By
sharing core clock domains, they reduce clock network
usage and avoid clock synchronization logic between the
interfaces. To share core clocks, denote one of the
interfaces as "Master", and the remaining interfaces as
"Slave". In the RTL, connect the clks_sharing_master_out
signal from the master interface to the
clks_sharing_slave_in signal of all the slave interfaces. Both
master and slave interfaces still expose their own output
clock ports in the RTL (for example, emif_usr_clk, afi_clk),
but the physical signals are equivalent, hence it does not
matter whether a clock port from a master or a slave is
used. As the combined width of all interfaces sharing the
same core clock increases, you may encounter timing
closure difficulty for transfers between the FPGA core and
the periphery.
Memory clock frequency
PHY_MEM_CLK_FREQ_
MHZ
Specifies the operating frequency of the memory interface
in MHz. If you change the memory frequency, you should
update the memory latency parameters on the "Memory"
tab and the memory timing parameters on the "Mem
Timing" tab.
Clock rate of user logic
PHY_RATE_ENUM
Specifies the relationship between the user logic clock
frequency and the memory clock frequency. For example, if
the memory clock sent from the FPGA to the memory
device is toggling at 800MHz, a quarter-rate interface
means that the user logic in the FPGA runs at 200MHz.
PLL reference clock frequency
PHY_REF_CLK_FREQ_
MHZ
Specifies the PLL reference clock frequency. You must
configure this parameter only if you do not check the "Use
recommended PLL reference clock frequency" parameter. To
configure this parameter, select a valid PLL reference clock
frequency from the list. The values in the list can change if
you change the memory interface frequency and/or the
clock rate of the user logic. For best jitter performance, you
should use the fastest possible PLL reference clock
frequency.
PLL reference clock jitter
PHY_REF_CLK_JITTER
_PS
Specifies the peak-to-peak jitter on the PLL reference clock
source. The clock source of the PLL reference clock must
meet or exceed the following jitter requirements: 10ps peak
to peak, or 1.42ps RMS at 1e-12 BER, 1.22ps at 1e-16 BER.
Use recommended PLL
reference clock frequency
PHY_RLD3_DEFAULT_
REF_CLK_FREQ
Specifies that the PLL reference clock frequency is
automatically calculated for best performance. If you want
to specify a different PLL reference clock frequency, uncheck
the check box for this parameter.
Specify additional core clocks
based on existing PLL
PLL_ADD_EXTRA_CLK
S
Displays additional parameters allowing you to create
additional output clocks based on the existing PLL. This
parameter provides an alternative clock-generation
mechanism for when your design exhausts available PLL
resources. The additional output clocks that you create can
be fed into the core. Clock signals created with this
parameter are synchronous to each other, but asynchronous
to the memory interface core clock domains (such as
emif_usr_clk or afi_clk). You must follow proper clock-domain-crossing techniques when transferring data between
clock domains, for example with a synchronizer such as the
one sketched after this table.
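The following is a minimal sketch of one common clock-domain-crossing technique, suited to moving a single-bit, quasi-static control signal from an additional PLL output clock domain into the emif_usr_clk (or afi_clk) domain; it is not IP-generated code. Multi-bit buses or frequently changing data need a handshake or dual-clock FIFO instead, and the synchronizer registers should be constrained appropriately for timing analysis.

```verilog
// Minimal two-flop synchronizer sketch for a single-bit, quasi-static
// control signal crossing into the EMIF user clock domain.
module bit_sync (
  input  logic dst_clk,   // destination clock, e.g. emif_usr_clk or afi_clk
  input  logic async_in,  // signal launched from an additional PLL output clock
  output logic sync_out
);
  logic [1:0] sync_ff;

  always_ff @(posedge dst_clk) begin
    sync_ff <= {sync_ff[0], async_in};
  end

  assign sync_out = sync_ff[1];
endmodule
```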
Table 266.
Group: General / Additional Core Clocks
Display Name
Number of additional core
clocks
Identifier
Description
PLL_USER_NUM_OF_E
XTRA_CLKS
Specifies the number of additional output clocks to create
from the PLL.
7.4.8.2 Arria 10 EMIF IP RLDRAM 3 Parameters: Memory
Table 267.
Group: Memory / Topology
Display Name
Identifier
Description
Address width
MEM_RLD3_ADDR_WI
DTH
Number of address pins.
CS# width
MEM_RLD3_CS_WIDT
H
Number of chip selects of the memory interface.
Enable depth expansion using
twin die package
MEM_RLD3_DEPTH_E
XPANDED
Indicates whether to combine two RLDRAM3 devices to
double the address space, resulting in more density.
DK width
MEM_RLD3_DK_WIDT
H
Number of DK clock pairs of the memory interface. This is
equal to the number of write data groups, and is
automatically calculated based on the DQ width per device
and whether width expansion is enabled.
DQ width per device
MEM_RLD3_DQ_PER_
DEVICE
Specifies number of DQ pins per RLDRAM3 device and
number of DQ pins per port per QDR IV device. Available
widths for DQ are x18 and x36.
DQ width
MEM_RLD3_DQ_WIDT
H
Number of data pins of the memory interface. Automatically
calculated based on the DQ width per device and whether
width expansion is enabled.
QK width
MEM_RLD3_QK_WIDT
H
Number of QK output clock pairs of the memory interface.
This is equal to the number of read data groups, and is
automatically calculated based on the DQ width per device
and whether width expansion is enabled.
Enable width expansion
MEM_RLD3_WIDTH_E
XPANDED
Indicates whether to combine two memory devices to
double the data bus width. With two devices, the interface
supports a width expansion configuration up to 72-bits. For
width expansion configuration, the address and control
signals are routed to 2 devices.
Table 268.
Group: Memory / Mode Register Settings
Display Name
Identifier
Description
AREF protocol
MEM_RLD3_AREF_PR
OTOCOL_ENUM
Determines the mode register setting that controls the AREF
protocol setting. The AUTO REFRESH (AREF) protocol is
selected by setting mode register 1. There are two ways in
which AREF commands can be issued to the RLDRAM: the
memory controller can issue either bank address-controlled
or multibank AREF commands. The multibank refresh
protocol allows simultaneous refreshing of a row in up to
four banks.
Data Latency
MEM_RLD3_DATA_LAT
ENCY_MODE_ENUM
Determines the mode register setting that controls the data
latency. Sets both READ and WRITE latency (RL and WL).
ODT
MEM_RLD3_ODT_MOD
E_ENUM
Determines the mode register setting that controls the ODT
setting.
Output drive
MEM_RLD3_OUTPUT_
DRIVE_MODE_ENUM
Determines the mode register setting that controls the
output drive setting.
tRC
MEM_RLD3_T_RC_MO
DE_ENUM
Determines the mode register setting that controls tRC (the
activate-to-activate timing parameter, or row cycle time).
Refer to the tRC table in the memory vendor data sheet,
and set tRC according to the memory speed grade and data
latency.
Write protocol
MEM_RLD3_WRITE_P
ROTOCOL_ENUM
Determines the mode register setting that controls the write
protocol setting. When a multiple-bank option (dual bank or
quad bank) is selected, identical data is written to multiple banks.
7.4.8.3 Arria 10 EMIF IP RLDRAM 3 Parameters: FPGA I/O
You should use Hyperlynx* or similar simulators to determine the best settings for
your board. Refer to the EMIF Simulation Guidance wiki page for additional
information.
Table 269.
Group: FPGA IO / FPGA IO Settings
Display Name
Use default I/O settings
Identifier
PHY_RLD3_DEFAULT_I
O
Description
Specifies that a legal set of I/O settings is automatically
selected. The default I/O settings are not necessarily
optimized for a specific board. To achieve optimal signal
integrity, perform I/O simulations with IBIS models and
enter the I/O settings manually, based on simulation
results.
Voltage
PHY_RLD3_IO_VOLTA
GE
The voltage level for the I/O pins driving the signals
between the memory device and the FPGA memory
interface.
Periodic OCT re-calibration
PHY_USER_PERIODIC
_OCT_RECAL_ENUM
Specifies that the system periodically recalibrate on-chip
termination (OCT) to minimize variations in termination
value caused by changing operating conditions (such as
changes in temperature). By recalibrating OCT, I/O timing
margins are improved. When enabled, this parameter
causes the PHY to halt user traffic about every 0.5 seconds
for about 1900 memory clock cycles, to perform OCT
recalibration. Efficiency is reduced by about 1% when this
option is enabled.
Table 270.
Group: FPGA IO / Address/Command
Display Name
Identifier
Description
I/O standard
PHY_RLD3_USER_AC_
IO_STD_ENUM
Specifies the I/O electrical standard for the address/
command pins of the memory interface. The selected I/O
standard configures the circuit within the I/O buffer to
match the industry standard.
Output mode
PHY_RLD3_USER_AC_
MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_RLD3_USER_AC_
SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 271.
Group: FPGA IO / Memory Clock
Display Name
Identifier
Description
I/O standard
PHY_RLD3_USER_CK_
IO_STD_ENUM
Specifies the I/O electrical standard for the memory clock
pins. The selected I/O standard configures the circuit within
the I/O buffer to match the industry standard.
Output mode
PHY_RLD3_USER_CK_
MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_RLD3_USER_CK_
SLEW_RATE_ENUM
Specifies the slew rate of the memory clock output pins.
The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the memory clock
signals.
Table 272.
Group: FPGA IO / Data Bus
Display Name
Identifier
Use recommended initial Vrefin
PHY_RLD3_USER_AUT
O_STARTING_VREFIN
_EN
Specifies that the initial Vrefin setting is calculated
automatically, to a reasonable value based on termination
settings.
Input mode
PHY_RLD3_USER_DAT
A_IN_MODE_ENUM
This parameter allows you to change the input termination
settings for the selected I/O standard. Perform board
simulation with IBIS models to determine the best settings
for your design.
I/O standard
PHY_RLD3_USER_DAT
A_IO_STD_ENUM
Specifies the I/O electrical standard for the data and data
clock/strobe pins of the memory interface. The selected I/O
standard option configures the circuit within the I/O buffer
to match the industry standard.
Output mode
PHY_RLD3_USER_DAT
A_OUT_MODE_ENUM
This parameter allows you to change the output current
drive strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Initial Vrefin
PHY_RLD3_USER_STA
RTING_VREFIN
Specifies the initial value for the reference voltage on the
data pins (Vrefin). This value is entered as a percentage of
the supply voltage level on the I/O pins. The specified value
serves as a starting point and may be overridden by
calibration to provide better timing margins. If you choose
to skip Vref calibration (Diagnostics tab), this is the value
that is used as the Vref for the interface.
Table 273.
Group: FPGA IO / PHY Inputs
Display Name
Identifier
Description
PLL reference clock I/O
standard
PHY_RLD3_USER_PLL
_REF_CLK_IO_STD_E
NUM
Specifies the I/O standard for the PLL reference clock of the
memory interface.
RZQ I/O standard
PHY_RLD3_USER_RZQ
_IO_STD_ENUM
Specifies the I/O standard for the RZQ pin used in the
memory interface.
RZQ resistor
PHY_RZQ
Specifies the reference resistor used to calibrate the on-chip
termination value. You should connect the RZQ pin to GND
through an external resistor of the specified value.
7.4.8.4 Arria 10 EMIF IP RLDRAM 3 Parameters: Mem Timing
These parameters should be read from the table in the datasheet associated with the
speed bin of the memory device (not necessarily the frequency at which the interface
is running).
Table 274.
Group: Mem Timing / Parameters dependent on Speed Bin
Display Name
Identifier
Description
Speed bin
MEM_RLD3_SPEEDBIN
_ENUM
The speed grade of the memory device used. This
parameter refers to the maximum rate at which the
memory device is specified to run.
tCKDK_max
MEM_RLD3_TCKDK_M
AX_CYC
tCKDK_max refers to the maximum skew from the memory
clock (CK) to the write strobe (DK).
tCKDK_min
MEM_RLD3_TCKDK_M
IN_CYC
tCKDK_min refers to the minimum skew from the memory
clock (CK) to the write strobe (DK).
tCKQK_max
MEM_RLD3_TCKQK_M
AX_PS
tCKQK_max refers to the maximum skew from the memory
clock (CK) to the read strobe (QK).
tDH (base) DC level
MEM_RLD3_TDH_DC_
MV
tDH (base) DC level refers to the voltage level which the
data bus must not cross during the hold window. The signal
is considered stable only if it remains above this voltage
level (for a logic 1) or below this voltage level (for a logic 0)
for the entire hold period.
tDH (base)
MEM_RLD3_TDH_PS
tDH (base) refers to the hold time for the Data (DQ) bus
after the rising edge of CK.
tDS (base) AC level
MEM_RLD3_TDS_AC_MV
tDS (base) AC level refers to the voltage level which the
data bus must cross and remain above during the setup
margin window. The signal is considered stable only if it
remains above this voltage level (for a logic 1) or below this
voltage level (for a logic 0) for the entire setup period.
tDS (base)
MEM_RLD3_TDS_PS
tDS(base) refers to the setup time for the Data (DQ) bus
before the rising edge of the DQS strobe.
tIH (base) DC level
MEM_RLD3_TIH_DC_MV
tIH (base) DC level refers to the voltage level which the
address/command signal must not cross during the hold
window. The signal is considered stable only if it remains
above this voltage level (for a logic 1) or below this voltage
level (for a logic 0) for the entire hold period.
tIH (base)
MEM_RLD3_TIH_PS
tIH (base) refers to the hold time for the Address/Command
(A) bus after the rising edge of CK. Depending on the AC level chosen for the design, the hold margin can
vary; this variance is determined automatically when you choose the "tIH (base) AC level".
tIS (base) AC level
MEM_RLD3_TIS_AC_MV
tIS (base) AC level refers to the voltage level which the
address/command signal must cross and remain above
during the setup margin window. The signal is considered
stable only if it remains above this voltage level (for a logic
1) or below this voltage level (for a logic 0) for the entire
setup period.
tIS (base)
MEM_RLD3_TIS_PS
tIS (base) refers to the setup time for the Address/
Command/Control (A) bus to the rising edge of CK.
tQH
MEM_RLD3_TQH_CYC
tQH specifies the output hold time for the DQ/DINV in
relation to QK.
tQKQ_max
MEM_RLD3_TQKQ_MAX_PS
tQKQ_max describes the maximum skew between the read
strobe (QK) clock edge to the data bus (DQ/DINV) edge.
7.4.8.5 Arria 10 EMIF IP RLDRAM 3 Parameters: Board
Table 275.
Group: Board / Intersymbol Interference/Crosstalk
Display Name
Identifier
Description
Address and command ISI/
crosstalk
BOARD_RLD3_USER_AC_ISI_NS
The address and command window reduction due to ISI and
crosstalk effects. The number to be entered is the total loss
of margin on both the setup and hold sides (measured loss
on the setup side + measured loss on the hold side). Refer
to the EMIF Simulation Guidance wiki page for additional
information.
QK/QK# ISI/crosstalk
BOARD_RLD3_USER_RCLK_ISI_NS
QK/QK# ISI/crosstalk describes the reduction of the read
data window due to intersymbol interference and crosstalk
effects on the QK/QK# signal when driven by the memory
device during a read. The number to be entered in the
Quartus Prime software is the total of the measured loss of
margin on the setup side plus the measured loss of margin
on the hold side. Refer to the EMIF Simulation Guidance
wiki page for additional information.
DK/DK# ISI/crosstalk
BOARD_RLD3_USER_WCLK_ISI_NS
DK/DK# ISI/crosstalk describes the reduction of the write
data window due to intersymbol interference and crosstalk
effects on the DK/DK# signal when driven by the FPGA
during a write. The number to be entered in the Quartus
Prime software is the total of the measured loss of margin
on the setup side plus the measured loss of margin on the
hold side. Refer to the EMIF Simulation Guidance wiki page
for additional information.
Write DQ ISI/crosstalk
BOARD_RLD3_USER_WDATA_ISI_NS
The reduction of the write data window due to ISI and
crosstalk effects on the DQ signal when driven by the FPGA
during a write. The number to be entered is the total of the
measured loss of margin on the setup side plus the
measured loss of margin on the hold side. Refer to the EMIF
Simulation Guidance wiki page for additional information.
Use default ISI/crosstalk
values
BOARD_RLD3_USE_DEFAULT_ISI_VALUES
You can enable this option to use default intersymbol
interference and crosstalk values for your topology. Note
that the default values are not optimized for your board. For
optimal signal integrity, do not enable this parameter; instead, perform I/O simulation
using IBIS models and Hyperlynx*, and manually enter values based on your simulation results.
Table 276.
Group: Board / Board and Package Skews
Display Name
Identifier
Description
Average delay difference
between address/command
and CK
BOARD_RLD3_AC_TO_CK_SKEW_NS
The average delay difference between the address/
command signals and the CK signal, calculated by averaging
the longest and smallest address/command signal trace
delay minus the maximum CK trace delay. Positive values
represent address and command signals that are longer
than CK signals and negative values represent address and
command signals that are shorter than CK signals.
Maximum board skew within
QK group
BOARD_RLD3_BRD_SKEW_WITHIN_QK_NS
Maximum board skew within QK group refers to the largest
skew between all DQ and DM pins in a QK group. This value
can affect the read capture and write margins.
Average delay difference
between DK and CK
BOARD_RLD3_DK_TO_CK_SKEW_NS
This parameter describes the average delay difference
between the DK signals and the CK signal, calculated by
averaging the longest and smallest DK trace delay minus
the CK trace delay. Positive values represent DK signals that
are longer than CK signals and negative values represent
DK signals that are shorter than CK signals.
Package deskewed with board
layout (address/command
bus)
BOARD_RLD3_IS_SKEW_WITHIN_AC_DESKEWED
Enable this parameter if you are compensating for package
skew on the address, command, control, and memory clock
buses in the board layout. Include package skew in
calculating the following board skew parameters.
Package deskewed with board
layout (QK group)
BOARD_RLD3_IS_SKEW_WITHIN_QK_DESKEWED
Enable this parameter if you are compensating for package skew on the QK bus in
the board layout. Include package skew in calculating the following board skew parameters.
Maximum CK delay to device
BOARD_RLD3_MAX_CK_DELAY_NS
The maximum CK delay to device refers to the delay of the
longest CK trace from the FPGA to any device.
Maximum DK delay to device
BOARD_RLD3_MAX_DK_DELAY_NS
The maximum DK delay to device refers to the delay of the
longest DK trace from the FPGA to any device.
Maximum system skew within
address/command bus
BOARD_RLD3_PKG_BRD_SKEW_WITHIN_AC_NS
Maximum system skew within address/command bus refers
to the largest skew between the address and command
signals.
Maximum delay difference
between devices
BOARD_RLD3_SKEW_BETWEEN_DIMMS_NS
This parameter describes the largest propagation delay on
the DQ signals between ranks. For example, in a two-rank
configuration where devices are placed in series, there is an
extra propagation delay for DQ signals going to and coming
back from the furthest device compared to the nearest
device. This parameter is only applicable when there is
more than one rank.
Maximum skew between DK
groups
BOARD_RLD3_SKEW_BETWEEN_DK_NS
This parameter describes the largest skew between DK
signals in different DK groups.
7.4.8.6 Arria 10 EMIF IP RLDRAM 3 Parameters: Diagnostics
Table 277.
Group: Diagnostics / Simulation Options
Display Name
Identifier
Abstract phy for fast simulation
DIAG_RLD3_ABSTRACT_PHY
Specifies that the system use Abstract PHY for simulation.
Abstract PHY replaces the PHY with a model for fast
simulation and can reduce simulation time by 2-3 times.
Abstract PHY is available for certain protocols and device
families, and only when you select Skip Calibration.
Calibration mode
DIAG_SIM_CAL_MODE_ENUM
Specifies whether to skip memory interface calibration
during simulation, or to simulate the full calibration process.
Simulating the full calibration process can take hours (or
even days), depending on the width and depth of the
memory interface. You can achieve much faster simulation
times by skipping the calibration process, but that is only
expected to work when the memory model is ideal and the
interconnect delays are zero. If you enable this parameter,
the interface still performs some memory initialization
before starting normal operations. Abstract PHY is
supported with skip calibration.
Table 278.
Group: Diagnostics / Calibration Debug Options
Display Name
Identifier
Description
Enable Daisy-Chaining for
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_AVALON_MASTER
Specifies that the IP export an Avalon-MM master interface
(cal_debug_out) which can connect to the cal_debug
interface of other EMIF cores residing in the same I/O
column. This parameter applies only if the EMIF Debug
Toolkit or On-Chip Debug Port is enabled. Refer to the
Debugging Multiple EMIFs wiki page for more information
about debugging multiple EMIFs.
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_AVALON_SLAVE
Specifies the connectivity of an Avalon slave interface for
use by the Quartus Prime EMIF Debug Toolkit or user core
logic. If you set this parameter to "Disabled," no debug
features are enabled. If you set this parameter to "Export,"
an Avalon slave interface named "cal_debug" is exported
from the IP. To use this interface with the EMIF Debug
Toolkit, you must instantiate and connect an EMIF debug
interface IP core to it, or connect it to the cal_debug_out
interface of another EMIF core. If you select "Add EMIF
Debug Interface", an EMIF debug interface component
containing a JTAG Avalon Master is connected to the debug
port, allowing the core to be accessed by the EMIF Debug
Toolkit. Only one EMIF debug interface should be
instantiated per I/O column. You can chain additional EMIF
or PHYLite cores to the first by enabling the "Enable Daisy-Chaining for Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option for all cores in the chain, and selecting
"Export" for the "Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option on all cores after the first.
Interface ID
DIAG_INTERFACE_ID
Identifies interfaces within the I/O column, for use by the
EMIF Debug Toolkit and the On-Chip Debug Port. Interface
IDs should be unique among EMIF cores within the same
I/O column. If the Quartus Prime EMIF Debug Toolkit/On-Chip Debug Port parameter is set to Disabled, the interface
ID is unused.
Use Soft NIOS Processor for
On-Chip Debug
DIAG_SOFT_NIOS_MODE
Enables a soft Nios processor as a peripheral component to
access the On-Chip Debug Port. Only one interface in a
column can activate this option.
Table 279.
Group: Diagnostics / Example Design
Display Name
Identifier
Description
Enable In-System-Sources-and-Probes
DIAG_EX_DESIGN_ISSP_EN
Enables In-System-Sources-and-Probes in the example
design for common debug signals, such as calibration status
or example traffic generator per-bit status. This parameter
must be enabled if you want to do driver margining.
Number of core clocks sharing
slaves to instantiate in the
example design
DIAG_EX_DESIGN_NUM_OF_SLAVES
Specifies the number of core clock sharing slaves to
instantiate in the example design. This parameter applies
only if you set the "Core clocks sharing" parameter in the
"General" tab to Master or Slave.
Table 280.
Group: Diagnostics / Traffic Generator
Display Name
Identifier
Description
Bypass the default traffic
pattern
DIAG_BYPASS_DEFAULT_PATTERN
Specifies that the controller/interface bypass the traffic
generator 2.0 default pattern after reset. If you do not
enable this parameter, the traffic generator does not assert
a pass or fail status until the generator is configured and
signaled to start by its Avalon configuration interface.
Bypass the traffic generator
repeated-writes/repeated-reads test pattern
DIAG_BYPASS_REPEAT_STAGE
Specifies that the controller/interface bypass the traffic
generator's repeat test stage. If you do not enable this
parameter, every write and read is repeated several times.
Bypass the traffic generator
stress pattern
DIAG_BYPASS_STRESS_STAGE
Specifies that the controller/interface bypass the traffic
generator's stress pattern stage. (Stress patterns are meant
to create worst-case signal integrity patterns on the data
pins.) If you do not enable this parameter, the traffic
generator does not assert a pass or fail status until the
generator is configured and signaled to start by its Avalon
configuration interface.
Bypass the user-configured
traffic stage
DIAG_BYPASS_USER_STAGE
Specifies that the controller/interface bypass the user-configured traffic generator's pattern after reset. If you do
not enable this parameter, the traffic generator does not
assert a pass or fail status until the generator is configured
and signaled to start by its Avalon configuration interface.
Configuration can be done by connecting to the traffic
generator via the EMIF Debug Toolkit, or by using custom
logic connected to the Avalon-MM configuration slave port
on the traffic generator. Configuration can also be simulated
using the example testbench provided in the
altera_emif_avl_tg_2_tb.sv file.
Run diagnostic on infinite test
duration
DIAG_INFI_TG2_ERR_TEST
Specifies that the traffic generator run indefinitely until the
first error is detected.
Export Traffic Generator 2.0
configuration interface
DIAG_TG_AVL_2_EXPORT_CFG_INTERFACE
Specifies that the IP export an Avalon-MM slave port for
configuring the Traffic Generator. This is required only if you
are configuring the traffic generator through user logic and
not through the EMIF Debug Toolkit.
Use configurable Avalon traffic
generator 2.0
DIAG_USE_TG_AVL_2
This option allows users to add the new configurable Avalon
traffic generator to the example design.
Table 281.
Group: Diagnostics / Performance
Display Name
Identifier
Description
Enable Efficiency Monitor
DIAG_EFFICIENCY_MONITOR
Adds an Efficiency Monitor component to the Avalon-MM interface of the memory controller, allowing you to view efficiency statistics of the interface. You can access the efficiency statistics using the EMIF Debug Toolkit.
Table 282.
Group: Diagnostics / Miscellaneous
Display Name
Identifier
Description
Use short Qsys interface names
SHORT_QSYS_INTERFACE_NAMES
Specifies the use of short interface names, for improved
usability and consistency with other Qsys components. If
this parameter is disabled, the names of Qsys interfaces
exposed by the IP will include the type and direction of the
interface. Long interface names are supported for
backward-compatibility and will be removed in a future
release.
7.4.8.7 Arria 10 EMIF IP RLDRAM 3 Parameters: Example Designs
Table 283.
Group: Example Designs / Available Example Designs
Display Name
Identifier
Description
Select design
EX_DESIGN_GUI_RLD3_SEL_DESIGN
Specifies the creation of a full Quartus Prime project, instantiating an external memory interface and an example traffic generator, according to your parameterization. After the design is created, you can specify the target device and pin location assignments, run a full compilation, verify timing closure, and test the interface on your board using the programming file created by the Quartus Prime assembler. The 'Generate Example Design' button lets you generate simulation or synthesis file sets.
Table 284.
Group: Example Designs / Example Design Files
Display Name
Identifier
Description
Simulation
EX_DESIGN_GUI_RLD3_GEN_SIM
Specifies that the 'Generate Example Design' button create
all necessary file sets for simulation. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, simulation file sets are not created.
Instead, the output directory will contain the ed_sim.qsys
file which holds Qsys details of the simulation example
design, and a make_sim_design.tcl file with other
corresponding tcl files. You can run make_sim_design.tcl
from a command line to generate the simulation example
design. The generated example designs for various
simulators are stored in the /sim sub-directory.
Synthesis
EX_DESIGN_GUI_RLD3_GEN_SYNTH
Specifies that the 'Generate Example Design' button create all necessary file sets for synthesis. Expect a short additional delay as the file set is created. If you do not enable this parameter, synthesis file sets are not created. Instead, the output directory will contain the ed_synth.qsys file which holds Qsys details of the synthesis example design, and a make_qii_design.tcl script with other corresponding tcl files. You can run make_qii_design.tcl from a command line to generate the synthesis example design. The generated example design is stored in the /qii sub-directory.
Table 285.
Group: Example Designs / Generated HDL Format
Display Name
Identifier
Description
Simulation HDL format
EX_DESIGN_GUI_RLD3_HDL_FORMAT
This option lets you choose the format of HDL in which generated simulation files are created.
Table 286.
Group: Example Designs / Target Development Kit
Display Name
Identifier
Description
Select board
EX_DESIGN_GUI_RLD3_TARGET_DEV_KIT
Specifies that when you select a development kit with a
memory module, the generated example design contains all
settings and fixed pin assignments to run on the selected
board. You must select a development kit preset to
generate a working example design for the specified
development kit. Any IP settings not applied directly from a
development kit preset will not have guaranteed results
when testing the development kit. To exclude hardware
support of the example design, select 'none' from the
'Select board' pull down menu. When you apply a
development kit preset, all IP parameters are automatically
set appropriately to match the selected preset. If you want
to save your current settings, you should do so before you
apply the preset. You can save your settings under a
different name using File->Save as.
7.4.8.8 About Memory Presets
Presets help simplify the process of copying memory parameter values from memory
device data sheets to the EMIF parameter editor.
For DDRx protocols, the memory presets are named using the following convention:
PROTOCOL-SPEEDBIN LATENCY FORMAT-AND-TOPOLOGY CAPACITY (INTERNAL-ORGANIZATION)
For example, the preset named DDR4-2666U CL18 Component 1CS 2Gb (512Mb
x 4) refers to a DDR4 x4 component rated at the DDR4-2666U JEDEC speed bin, with
nominal CAS latency of 18 cycles, one chip-select, and a total memory space of 2Gb.
The JEDEC memory specification defines multiple speed bins for a given frequency
(that is, DDR4-2666U and DDR4-2666V). You may be able to determine the exact
speed bin implemented by your memory device using its nominal latency. When in
doubt, contact your memory vendor.
For RLDRAMx and QDRx protocols, the memory presets are named based on the
vendor's device part number.
When the preset list does not contain the exact configuration required, you can still
minimize data entry by selecting the preset closest to your configuration and then
modify parameters as required.
Before going to production, always review the parameter values to ensure that they
match your memory device data sheet, regardless of whether you use a preset.
Incorrect memory parameters can cause functional failures.
7.4.9 Equations for Arria 10 EMIF IP Board Skew Parameters
The following topics illustrate the underlying equations for the board skew parameters
for each supported memory protocol.
7.4.9.1 Equations for DDR3/DDR4/LPDDR3 Board Skew Parameters
Table 287.
Parameter Equations
Parameter
Maximum CK delay to DIMM/device
Description/Equation
The delay of the longest CK trace from the FPGA to any
DIMM/device.
where n is the number of memory clocks and r is the number of ranks of the DIMM/device. For example, in a
dual-rank DIMM implementation with two pairs of memory clocks per rank, the maximum CK delay is expressed by
the following equation:
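As a sketch of the computation implied by this description (the general form takes the maximum trace delay over all memory clocks n and ranks r; the expansion shows the dual-rank, two-clock-pair case):

\[ \text{Maximum CK delay} = \max_{n,r}\bigl(\mathrm{CK}_{n,r}\ \text{path delay}\bigr) = \max\bigl(\mathrm{CK}_{1,1},\ \mathrm{CK}_{2,1},\ \mathrm{CK}_{1,2},\ \mathrm{CK}_{2,2}\bigr) \]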
Maximum DQS delay to DIMM/device
The delay of the longest DQS trace from the FPGA to the
DIMM/device.
where n is the number of DQS and r is the number of ranks of the DIMM/device. For example, in a dual-rank DIMM
implementation with two DQS per rank, the maximum DQS delay is expressed by the following
equation:
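A sketch of the corresponding form, based on the description above:

\[ \text{Maximum DQS delay} = \max_{n,r}\bigl(\mathrm{DQS}_{n,r}\ \text{path delay}\bigr) = \max\bigl(\mathrm{DQS}_{1,1},\ \mathrm{DQS}_{2,1},\ \mathrm{DQS}_{1,2},\ \mathrm{DQS}_{2,2}\bigr) \]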
Average delay difference between DQS and CK
The average delay difference between the DQS signals and
the CK signal, calculated by averaging the longest and
smallest DQS delay minus the CK delay. Positive values
represent DQS signals that are longer than CK signals and
negative values represent DQS signals that are shorter than
CK signals. The Quartus Prime software uses this skew to
optimize the delay of the DQS signals for appropriate setup
and hold margins.
where n is the number of memory clocks, m is the number of DQS, and r is the number of ranks of the DIMM/device.
When using discrete components, the calculation differs slightly. Calculate (DQS – CK) for each DQS group, using the appropriate CK for that group; then find the minimum and maximum values of (DQS – CK) over all groups and divide by 2.
For example, in a configuration with five x16 components, each having two DQS groups: calculate the minimum and maximum of (DQS0 – CK0, DQS1 – CK0, DQS2 – CK1, DQS3 – CK1, and so forth) and then divide the result by 2.
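As a sketch of the computation described above (averaging the longest and smallest DQS-minus-CK delay):

\[ \text{Average delay difference} = \frac{\max_{n,m,r}\bigl(\mathrm{DQS}_{m,r} - \mathrm{CK}_{n,r}\bigr) + \min_{n,m,r}\bigl(\mathrm{DQS}_{m,r} - \mathrm{CK}_{n,r}\bigr)}{2} \]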
Maximum Board skew within DQS group
The largest skew between all DQ and DM pins in a DQS
group. Enter your board skew only. Package skew is
calculated automatically, based on the memory interface
configuration, and added to this value. This value affects the
read capture and write margins.
Maximum skew between DQS groups
The largest skew between DQS signals in different DQS
groups.
Maximum system skew within address/command bus
The largest skew between the address and command
signals. Enter combined board and package skew. In the
case of a component, find the maximum address/command
and minimum address/command values across all
component address signals.
Average delay difference between address/command and
CK
A value equal to the average of the longest and smallest
address/command signal delays, minus the delay of the CK
signal. The value can be positive or negative.
The average delay difference between the address/
command and CK is expressed by the following equation:
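A sketch of the form implied by the description (average of the longest and smallest address/command delays, minus the CK delay):

\[ \text{Average delay difference} = \frac{\max\bigl(\mathrm{AC}\ \text{delay}\bigr) + \min\bigl(\mathrm{AC}\ \text{delay}\bigr)}{2} - \max_{n}\bigl(\mathrm{CK}_{n}\ \text{delay}\bigr) \]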
where n is the number of memory clocks.
Maximum delay difference between DIMMs/devices
The largest propagation delay on DQ signals between ranks. For example, in a two-rank configuration where you place DIMMs in different slots, there is an extra propagation delay for DQ signals going to and coming back from the furthest DIMM compared to the nearest DIMM. This parameter is applicable only when there is more than one rank.
\[ \max_{r}\Bigl\{ \max_{n,m}\bigl[ \mathrm{DQ}_{n,r}\ \text{path delay} - \mathrm{DQ}_{n,r+1}\ \text{path delay},\ \ \mathrm{DQS}_{m,r}\ \text{path delay} - \mathrm{DQS}_{m,r+1}\ \text{path delay} \bigr] \Bigr\} \]
where n is the number of DQ, m is the number of DQS, and r is the number of ranks of the DIMM/device.
7.4.9.2 Equations for QDR-IV Board Skew Parameters
Table 288.
Parameter Equations
Parameter
Description/Equation
Maximum system skew within address/command bus
The largest skew between the address and command
signals. Enter combined board and package skew.
Average delay difference between address/command and
CK
The average delay difference between the address and
command signals and the CK signal, calculated by averaging
the longest and smallest Address/Command signal delay
minus the CK delay. Positive values represent address and
command signals that are longer than CK signals and
negative values represent address and command signals
that are shorter than CK signals. The Quartus Prime
software uses this skew to optimize the delay of the address
and command signals to have appropriate setup and hold
margins.
Maximum System skew within QK group
The largest skew between all DQ and DM pins in a QK
group. Enter combined board and package skew. This value
affects the read capture and write margins.
Where n includes both DQa and DQb
Maximum CK delay to device
The delay of the longest CK trace from the FPGA to any
device.
where n is the number of memory clocks.
Maximum DK delay to device
The delay of the longest DK trace from the FPGA to any
device.
where n is the number of DK.
Average delay difference between DK and CK
The average delay difference between the DK signals and
the CK signal, calculated by averaging the longest and
smallest DK delay minus the CK delay. Positive values
represent DK signals that are longer than CK signals and
negative values represent DK signals that are shorter than
CK signals. The Quartus Prime software uses this skew to
optimize the delay of the DK signals to have appropriate
setup and hold margins.
where n is the number of memory clocks and m is the number of DK.
Maximum skew between DK groups
The largest skew between DK signals in different DK groups.
where n is the number of DK. Where n includes both DQa
and DQb.
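The QDR-IV skew parameters above follow the same maximum/average pattern; as a sketch:

\[ \text{Maximum CK (or DK) delay} = \max_{n}\bigl(\mathrm{CK}_{n}\ \text{or}\ \mathrm{DK}_{n}\ \text{path delay}\bigr), \qquad \text{Average DK-to-CK difference} = \frac{\max_{m}\bigl(\mathrm{DK}_{m} - \mathrm{CK}\bigr) + \min_{m}\bigl(\mathrm{DK}_{m} - \mathrm{CK}\bigr)}{2} \]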
7.4.9.3 Equations for QDRII, QDRII+, and QDRII+ Xtreme Board Skew
Parameters
Table 289.
Parameter Equations
Parameter
Description/Equation
Maximum system skew within address/command bus
The largest skew between the address and command
signals. Enter combined board and package skew.
Average delay difference between address/command and K
The average delay difference between the address and
command signals and the K signal, calculated by averaging
the longest and smallest Address/Command signal delay
minus the K delay. Positive values represent address and
command signals that are longer than K signals and
negative values represent address and command signals
that are shorter than K signals. The Quartus Prime software
uses this skew to optimize the delay of the address and
command signals to have appropriate setup and hold
margins.
where n is the number of K clocks.
Maximum board skew within Q group
The largest skew between all Q pins in a Q group. Enter
your board skew only. Package skew is calculated
automatically, based on the memory interface configuration,
and added to this value. This value affects the read capture
and write margins.
where g is the number of Q group.
Maximum board skew within D group
The largest skew between all D and BWS# pins in a D
group. Enter your board skew only. Package skew is
calculated automatically, based on the memory interface
configuration, and added to this value. This value affects the
read capture and write margins.
where g is the number of D group.
Maximum K delay to device
The delay of the longest K trace from the FPGA to any device, where n is the number of K clocks.
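As a sketch, the K-clock and group-skew parameters above take the same form as the other protocols:

\[ \text{Maximum K delay} = \max_{n}\bigl(\mathrm{K}_{n}\ \text{path delay}\bigr), \qquad \text{Maximum skew within a group} = \max_{g}\bigl(\text{longest pin delay} - \text{shortest pin delay}\bigr) \]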
7.4.9.4 Equations for RLDRAM 3 Board Skew Parameters
Table 290.
Parameter Equations
Parameter
Maximum CK delay to device
Description/Equation
The delay of the longest CK trace from the FPGA to any
device.
where n is the number of memory clocks. For example, the
maximum CK delay for two pairs of memory clocks is
expressed by the following equation:
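A sketch of that two-clock-pair case:

\[ \text{Maximum CK delay} = \max_{n}\bigl(\mathrm{CK}_{n}\ \text{path delay}\bigr) = \max\bigl(\mathrm{CK}_{1},\ \mathrm{CK}_{2}\bigr) \]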
Maximum DK delay to device
The delay of the longest DK trace from the FPGA to any
device.
where n is the number of DK. For example, the maximum
DK delay for two DK is expressed by the following equation:
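A sketch of that two-DK case:

\[ \text{Maximum DK delay} = \max_{n}\bigl(\mathrm{DK}_{n}\ \text{path delay}\bigr) = \max\bigl(\mathrm{DK}_{1},\ \mathrm{DK}_{2}\bigr) \]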
Average delay difference between DK and CK
The average delay difference between the DK signals and
the CK signal, calculated by averaging the longest and
smallest DK delay minus the CK delay. Positive values
represent DK signals that are longer than CK signals and
negative values represent DK signals that are shorter than
CK signals. The Quartus Prime software uses this skew to
optimize the delay of the DK signals to have appropriate
setup and hold margins.
where n is the number of memory clocks and m is the
number of DK.
Maximum system skew within address/command bus
The largest skew between the address and command
signals. Enter combined board and package skew.
Average delay difference between address/command and
CK
The average delay difference between the address and
command signals and the CK signal, calculated by averaging
the longest and smallest Address/Command signal delay
minus the CK delay. Positive values represent address and
command signals that are longer than CK signals and
negative values represent address and command signals
that are shorter than CK signals. The Quartus Prime
software uses this skew to optimize the delay of the address
and command signals to have appropriate setup and hold
margins.
Maximum board skew within QK group
The largest skew between all DQ and DM pins in a QK
group. Enter your board skew only. Package skew will be
calculated automatically, based on the memory interface
configuration, and added to this value. This value affects the
read capture and write margins.
where n is the number of DQ.
Maximum skew between DK groups
The largest skew between DK signals in different DK groups.
where n is the number of DQ.
7.5 Intel Stratix 10 External Memory Interface IP
This section contains information about parameterizing Intel Stratix 10 External
Memory Interface IP.
7.5.1 Qsys Interfaces
The interfaces in the Stratix 10 External Memory Interface IP each have signals that
can be connected in Qsys. The following tables list the signals available for each
interface and provide a description and guidance on how to connect those interfaces.
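As a practical reference, these connections can also be scripted with qsys-script. The following is a minimal, hypothetical Tcl sketch; the instance names (emif_0, user_logic_0) and the user-logic interface names (clock_sink, reset_sink, avalon_master) are assumptions for illustration, not names defined by this handbook:

# Minimal qsys-script sketch (Tcl); assumed instance and interface names.
package require -exact qsys 17.0
load_system my_emif_system.qsys
# Clock and reset from the EMIF IP to the user logic (see the
# emif_usr_clk_clock_source and emif_usr_reset_reset_source tables below).
add_connection emif_0.emif_usr_clk_clock_source   user_logic_0.clock_sink
add_connection emif_0.emif_usr_reset_reset_source user_logic_0.reset_sink
# User-logic Avalon-MM master to the hard controller's Avalon-MM slave.
add_connection user_logic_0.avalon_master emif_0.ctrl_amm_avalon_slave_0
# Export the memory and OCT conduits to the top level for I/O assignments.
add_interface mem conduit end
set_interface_property mem EXPORT_OF emif_0.mem_conduit_end
add_interface oct conduit end
set_interface_property oct EXPORT_OF emif_0.oct_conduit_end
save_system

The same connections can also be made interactively in the Qsys GUI.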
Stratix 10 External Memory Interface IP Interfaces
Table 291.
Interface: afi_clk_conduit_end
Interface type: Conduit
Signals in Interface
afi_clk
Direction
Output
Availability
• DDR3, DDR4, LPDDR3, RLDRAM 3, QDR IV
• Hard PHY only
Description
The Altera PHY Interface (AFI) clock output signal. The clock frequency in relation to the memory clock frequency depends on the Clock rate of user logic value set in the parameter editor.
Connect this interface to the (clock input) conduit of the custom AFI-based memory controller connected to the afi_conduit_end, or to any user logic block that requires the generated clock frequency.
Table 292.
Interface: afi_conduit_end
Interface type: Conduit
Signals in Interface
Direction
Availability
afi_cal_success
Output
•
afi_cal_fail
Output
•
afi_cal_req
Input
afi_rlat
Output
afi_wlat
Output
afi_addr
Input
afi_rst_n
Input
afi_wdata_valid
Input
afi_wdata
Input
afi_rdata_en_full
Input
afi_rdata
DDR3, DDR4, LPDDR3,
RLDRAM 3, QDR IV
Hard PHY only
Description
The Altera PHY Interface (AFI)
signals between the external
memory interface IP and the
custom AFI-based memory
controller.
Connect this interface to the AFI
conduit of the custom AFI-based
memory controller.
Output
afi_rdata_valid
Output
afi_rrank
Input
afi_wrank
Input
afi_ba
Input
•
•
DDR3, DDR4, RLDRAM 3
Hard PHY only
afi_cs_n
Input
•
•
DDR3, DDR4, LPDDR3,
RLDRAM 3
Hard PHY only
•
•
DDR3, DDR4, LPDDR3
Hard PHY only
•
•
QDR IV
Hard PHY only
•
•
QDR IV
Hard PHY only
The Altera PHY Interface (AFI)
signals between the external
memory interface IP and the
custom AFI-based memory
controller.
Connect this interface to the AFI
conduit of the custom AFI-based
memory controller.
The Altera PHY Interface (AFI)
signals between the external
memory interface IP and the
custom AFI-based memory
controller.
Connect this interface to the AFI
conduit of the custom AFI-based
memory controller.
For more information, refer to the
AFI 4.0 Specification.
afi_cke
Input
afi_odt
Input
afi_dqs_burst
Input
afi_ap
Input
afi_pe_n
Output
afi_ainv
Input
afi_ld_n
Input
afi_rw_n
Input
afi_cfg_n
Input
afi_lbk0_n
Input
afi_lbk1_n
Input
afi_rdata_dinv
Output
afi_wdata_dinv
Input
afi_we_n
Input
•
•
DDR3, RLDRAM 3
Hard PHY only
afi_dm
Input
•
•
•
DDR3, LPDDR3, RLDRAM 3
Hard PHY only
Enable DM pins=True
afi_ras_n
Input
afi_cas_n
Input
•
•
DDR3
Hard PHY only
afi_rm
Input
•
•
•
DDR3
Hard PHY only
LRDIMM with Number of
rank multiplication pins >
0
afi_par
Input
•
•
•
DDR3
Hard PHY only
RDIMM/LRDIMM
•
•
•
DDR4
Hard PHY only
Enable alert_n/par pins =
True
•
•
DDR4
Hard PHY only
afi_bg
Input
afi_act_n
Input
afi_dm_n
Input
•
•
•
DDR4
Hard PHY only
Enable DM pins=True
afi_ref_n
Input
•
•
RLDRAM 3
Hard PHY only
Table 293.
Interface: afi_half_clk_conduit_end
Interface type: Conduit
Signals in Interface
afi_half_clk
Direction
Output
Availability
•
•
DDR3, DDR4, LPDDR3,
RLDRAM 3, QDR IV
Hard PHY only
Description
The Altera PHY Interface (AFI) half
clock output signal. The clock runs
at half the frequency of the AFI
clock (afi_clk clock).
Connect this interface to the clock
input conduit of the user logic
block that needs to be clocked at
the generated clock frequency.
Table 294.
Interface: afi_reset_conduit_end
Interface type: Conduit
Signals in Interface
afi_reset_n
Direction
Output
Availability
• DDR3, DDR4, LPDDR3, RLDRAM 3, QDR IV
• Hard PHY only
Description
The Altera PHY Interface (AFI) reset output signal. Asserted when the PLL becomes unlocked or when the PHY is reset. Asynchronous assertion and synchronous deassertion.
Connect this interface to the reset input conduit of the custom AFI-based memory controller connected to the afi_conduit_end, and to all the user logic blocks that are under the AFI clock domain (afi_clk or afi_half_clk clock).
Table 295.
Interface: cal_debug_avalon_slave
Interface type: Avalon Memory-Mapped Slave
Signals in Interface
Direction
cal_debug_waitrequest
Output
cal_debug_read
Input
cal_debug_write
Input
cal_debug_addr
Input
cal_debug_read_data
Output
cal_debug_write_data
Input
cal_debug_byteenable
Input
cal_debug_read_data_valid
Output
Availability
• EMIF Debug Toolkit / On-Chip Debug Port=Export
Description
The Avalon-MM signals between the external memory interface IP and the external memory interface Debug Component.
Connect this interface to the (to_ioaux) Avalon-MM master of the Stratix 10 EMIF Debug Component IP or to the (cal_debug_out_avalon_master) Avalon-MM master of the other external memory interface IP that has exported the interface. If you are not using the Altera EMIF Debug Toolkit, connect this interface to the Avalon-MM master of the custom debug logic. When in daisy-chaining mode, ensure one of the connected Avalon masters is either the Stratix 10 EMIF Debug Component IP or the external memory interface IP with EMIF Debug Toolkit/On-Chip Debug Port set to Add EMIF Debug Interface.
Table 296.
Interface: cal_debug_clk_clock_sink
Interface type: Clock Input
Signals in Interface
cal_debug_clk
Direction
Input
•
Availability
Description
EMIF Debug Toolkit / On-Chip Debug Port=Export
The calibration debug clock input
signal.
Connect this interface to the
(avl_clk_out) clock output of
the Stratix 10 EMIF Debug
Component IP or to
(cal_debug_out_clk_clock_s
ource) clock input of the other
external memory interface IP,
depending on which IP the
cal_debug_avalon_slave
interface is connecting to. If you
are not using the Altera EMIF
Debug Toolkit, connect this
interface to the clock output of the
custom debug logic.
Table 297.
Interface: cal_debug_out_avalon_master
Interface type: Avalon Memory-Mapped Master
Signals in Interface
cal_debug_out_waitrequest
Direction
Availability
Input
•
cal_debug_out_read
Output
•
cal_debug_out_write
Output
cal_debug_out_addr
Output
cal_debug_out_read_data
Input
cal_debug_out_write_data
Output
cal_debug_out_byteenable
Output
cal_debug_out_read_data_valid
EMIF Debug Toolkit / On-Chip Debug Port=Export
Add EMIF Debug Interface with
Enable Daisy-Chaining for
EMIF Debug Toolkit/On-Chip Debug Port=True
Input
Description
The Avalon-MM signals between
the external memory interface IP
and the other external memory
interface IP.
Connect this interface to the
(cal_debug_avalon_slave)
Avalon-MM Master of the external
memory interface IP that has
exported the interface.
Table 298.
Interface: cal_debug_out_clk_clock_source
Interface type: Clock Output
Signals in Interface
cal_debug_out_clk
Direction
Output
Availability
•
•
EMIF Debug Toolkit / On-Chip Debug Port=Export
Add EMIF Debug Interface with
Enable Daisy-Chaining for
EMIF Debug Toolkit/On-Chip Debug Port=True
Description
The calibration debug clock output
signal.
For EMIF Debug Toolkit/On-Chip Debug Port=Export with
Enable Daisy-Chaining for EMIF
Debug Toolkit/ On-Chip Debug
Port=True, the clock frequency
follows the cal_debug_clk
frequency. Otherwise, the clock
frequency in relation to the
memory clock frequency depends
on the Clock rate of user logic
value set in the parameter editor.
Connect this interface to the
(cal_debug_out_reset_reset
_source) clock input of the other
external memory interface IP
where the
cal_debug_avalon_master
interface is connected, or
to any user logic block that needs
to be clocked at the generated
clock frequency.
Table 299.
Interface: cal_debug_out_reset_reset_source
Interface type: Reset Output
Signals in Interface
cal_debug_out_reset_n
Direction
Output
Availability
•
•
EMIF Debug Toolkit / On-Chip Debug Port=Export
Add EMIF Debug Interface with
Enable Daisy-Chaining for
EMIF Debug Toolkit/On-Chip Debug Port=True
Description
The calibration debug reset output
signal. Asynchronous assertion
and synchronous deassertion.
Connect this interface to the
(cal_debug_reset_reset_sin
k) reset input of the other external
memory interface IP where the
cal_debug_avalon_master
interface is connected, and to
all the user logic blocks that are
under the calibration debug clock
domain (cal_debug_out_clk
clock reset). If you are not
using the Altera EMIF Debug
Toolkit, connect this interface to
the reset output of the custom
debug logic.
Table 300.
Interface: cal_debug_reset_reset_sink
Interface type: Reset Input
Signals in Interface
cal_debug_reset_n
Direction
Input
•
Availability
Description
EMIF Debug Toolkit / On-Chip Debug Port=Export
The calibration debug reset input
signal. Requires asynchronous
assertion and synchronous
deassertion.
Connect this interface to the
(avl_rst_out) reset output of
the Stratix 10 EMIF Debug
Component IP or to
(cal_debug_out_reset_reset
_source) reset output of the other
external memory interface IP,
depending on which IP the
cal_debug_avalon_slave
interface is being connected to.
Table 301.
Interface: clks_sharing_master_out_conduit_end
Interface type: Conduit
Signals in Interface
clks_sharing_master_out
Table 302.
Direction
Input
Availability
•
Core clocks sharing=Master
Description
The core clock output signals.
Connect this interface to the
(clks_sharing_slave_in_con
duit_end) conduit of the other
external memory interface IP with
the Core clock sharing set to slave
or other PLL Slave.
Interface: clks_sharing_slave_in_conduit_end
Interface type: Conduit
Signals in Interface
clks_sharing_slave_in
Table 303.
Direction
Input
•
Availability
Description
Core clocks sharing=Slave
The core clock input signals.
Connect this interface to the
(clks_sharing_master_out_c
onduit_end) conduit of the other
external memory interface IP with
the Core clock sharing set to
Master or other PLL Master.
Interface: ctrl_amm_avalon_slave
Interface type: Avalon Memory-Mapped Slave
Signals in Interface
Direction
Availability
amm_ready
Output
•
amm_read
Input
•
amm_write
Input
amm_address
Input
amm_readdata
Output
amm_writedata
Input
amm_burstcount
Input
amm_readdatavalid
amm_byteenable
DDR3, DDR4 with Hard PHY &
Hard Controller
QDR II/II+/II+ Xtreme, QDR
IV
Output
Input
•
•
DDR3, DDR4 with Hard PHY &
Hard Controller and Enable
DM pins=True
QDR II/II+/II+ Xtreme with
Enable BWS# pins=True
Description
The Avalon-MM signals between
the external memory interface IP
and the user logic.
Connect this interface to the
Avalon-MM Master of the user logic
that needs to access the external
memory device. For QDR II/II+/II
+ Xtreme, connect the
ctrl_amm_avalon_slave_0 to
the user logic for read request and
connect the
ctrl_amm_avalon_slave_1 to
the user logic for write request.
In Ping Pong PHY mode, each
interface controls only one
memory device. Connect
ctrl_amm_avalon_slave_0 to
the user logic that will access the
first memory device, and connect
ctrl_amm_avalon_slave_1 to
the user logic that will access the
secondary memory device.
Table 304.
Interface: ctrl_auto_precharge_conduit_end
Interface type: Conduit
Signals in Interface
Direction
ctrl_auto_precharge_req
Table 305.
Input
Availability
•
DDR3, DDR4 with Hard PHY &
Hard Controller and Enable
Auto-Precharge
Control=True
Description
The auto-precharge control input
signal. Asserting the
ctrl_auto_precharge_req
signal while issuing a read or write
burst instructs the external
memory interface IP to issue read
or write with auto-precharge to
the external memory device. This
precharges the row immediately
after the command currently
accessing it finishes, potentially
speeding up a future access to a
different row of the same bank.
Connect this interface to the
conduit of the user logic block that
controls when the external
memory interface IP needs to
issue read or write with auto-precharge to the external memory
device.
Interface: ctrl_ecc_user_interrupt_conduit_end
Interface type: Conduit
Signals in Interface
ctrl_ecc_user_interrupt
Table 306.
Direction
Availability
Output
•
Description
DDR3, DDR4 with Hard
PHY & Hard Controller
and Enable Error
Detection and
Correction Logic = True
Controller ECC user interrupt
interface for connection to a
custom control block that
must be notified when ECC
errors occur.
Interface: ctrl_mmr_avalon_slave
Interface type: Avalon Memory-Mapped Slave
Signals in Interface
mmr_waitrequest
Direction
Output
mmr_read
Input
mmr_write
Input
mmr_address
Input
mmr_readdata
Output
mmr_writedata
Input
mmr_burstcount
Input
mmr_byteenable
Input
mmr_beginbursttransfer
Input
mmr_readdatavalid
Availability
•
DDR3, DDR4, LPDDR3 with
Hard PHY & Hard Controller
and Enable Memory-Mapped
Configuration and Status
Register (MMR)=True
Description
The Avalon-MM signals between
the external memory interface IP
and the user logic.
Connect this interface to the
Avalon-MM master of the user
logic that needs to access the
Memory-Mapped Configuration and
Status Register (MMR) in the
external memory interface IP.
Output
Table 307.
Interface: ctrl_power_down_conduit_end
Interface type: Conduit
Signals in Interface
ctrl_power_down_ack
Table 308.
Direction
Output
•
Availability
Description
DDR3, DDR4, LPDDR3 with
Hard PHY & Hard Controller
and Enable Auto Power
Down=True
The auto power-down
acknowledgment signals. When
the ctrl_power_down_ack
signal is asserted, it indicates that
the external memory interface IP
is placing the external memory
device into power-down mode.
Connect this interface to the
conduit of the user logic block that
requires the auto power-down
status, or leave it unconnected.
Interface: ctrl_user_priority_conduit_end
Interface type: Conduit
Signals in Interface
ctrl_user_priority_hi
Direction
Input
•
•
Table 309.
Availability
Description
DDR3, DDR4, LPDDR3 with
Hard PHY & Hard Controller
Avalon Memory-Mapped and
Enable Command Priority
Control=true
The command priority control
input signal. Asserting the
ctrl_user_priority_hi signal
while issuing a read or write
request instructs the external
memory interface to treat it as a
high-priority command. The
external memory interface
attempts to execute high-priority
commands sooner, to reduce
latency.
Connect this interface to the
conduit of the user logic block that
determines when the external
memory interface IP treats the
read or write request as a high-priority command.
Interface: emif_usr_clk_clock_source
Interface type: Clock Output
Signals in Interface
emif_usr_clk
Direction
Output
•
•
•
Availability
Description
DDR3, DDR4, LPDDR3, with
Hard PHY & Hard Controller
QDR II/II+/II+ Xtreme
QDR IV
The user clock output signal. The
clock frequency in relation to the
memory clock frequency depends
on the Clock rate of user logic
value set in the parameter editor.
Connect this interface to the clock
input of the respective user logic
connected to the
ctrl_amm_avalon_slave_0
interface, or to any user logic
block that must be clocked at the
generated clock frequency.
Table 310.
Interface: emif_usr_reset_reset_source
Interface type: Reset Output
Signals in Interface
emif_usr_reset_n
Direction
Output
•
•
•
Availability
Description
DDR3, DDR4, LPDDR3 with
Hard PHY & Hard Controller
QDR II/II+/II+ Xtreme
QDR IV
The user reset output signal.
Asserted when the PLL becomes
unlocked or the PHY is reset.
Asynchronous assertion and
synchronous deassertion.
Connect this interface to the clock
input of the respective user logic
connected to the
ctrl_amm_avalon_slave_0
interface, or to any user logic
block that must be clocked at the
generated clock frequency.
Table 311.
Interface: emif_usr_clk_sec_clock_source
Interface type: Clock Output
Signals in Interface
emif_usr_clk_sec
Direction
Output
•
Availability
Description
DDR3, DDR4, with Ping Pong
PHY
The user clock output signal. The
clock frequency in relation to the
memory clock frequency depends
on the Clock rate of user logic
value set in the parameter editor.
Connect this interface to the clock
input of the respective user logic
connected to the
ctrl_amm_avalon_slave_1
interface, or to any user logic
block that must be clocked at the
generated clock frequency.
Table 312.
Interface: emif_usr_reset_sec_reset_source
Interface type: Reset Output
Signals in Interface
emif_usr_reset_n_sec
Direction
Output
•
Availability
Description
DDR3, DDR4, with Ping Pong
PHY
The user reset output signal.
Asserted when the PLL becomes
unlocked or the PHY is reset.
Asynchronous assertion and
synchronous deassertion.
Connect this interface to the clock
input of the respective user logic
connected to the
ctrl_amm_avalon_slave_1
interface, or to any user logic
block that must be clocked at the
generated clock frequency.
Table 313.
Interface: global_reset_reset_sink
Interface type: Reset Input
Signals in Interface
global_reset_n
Direction
Input
Availability
•
Core Clock Sharing=No
Sharing / Master
Description
The global reset input signal.
Asserting the global_reset_n
signal causes the external memory
interface IP to be reset and
recalibrated.
Signals in Interface
Direction
Availability
Description
Connect this interface to the reset
output of the asynchronous or
synchronous reset source that
controls when the external
memory interface IP needs to be
reset and recalibrated.
Table 314.
Interface: mem_conduit_end
Interface type: Conduit
The memory interface signals between the external memory interface IP and the external memory device.
Export this interface to the top level for I/O assignments. Typically mem_rm[0] and mem_rm[1] connect to
CS2# and CS3# of the memory buffer of all LRDIMM slots.
Signals in Interface
Direction
Availability
mem_ck
Output
Always available
mem_ck_n
Output
mem_reset_n
Output
mem_a
Output
mem_k_n
Output
•
QDR II
mem_ras_n
Output
•
DDR3
mem_cas_n
Output
mem_odt
Output
•
DDR3, DDR4, LPDDR3
mem_dqs
Bidirectional
mem_dqs_n
Bidirectional
mem_ba
Output
•
DDR3, DDR4, RLDRAM 3
mem_cs_n
Output
•
DDR3, DDR4, LPDDR3, RLDRAM 3
mem_dq
Bidirectional
mem_we_n
Output
•
DDR3, RLDRAM 3
mem_dm
Output
•
DDR3, LPDDR3, RLDRAM 3 with Enable DM pins=True
mem_rm
Output
•
DDR3, RLDRAM 3 with Memory format=LRDIMM and Number of
rank multiplication pins > 0
mem_par
Output
•
•
DDR3 with Memory format=RDIMM / LRDIMM
DDR4 with Enable alert_n/par pins=True
mem_alert_n
Input
mem_cke
Output
•
DDR3, DDR4, LPDDR3
mem_bg
Output
•
DDR4
mem_act_n
Output
mem_dbi_n
Bidirectional
•
DDR4 with Enable DM pins=True or Write DBI=True or Read
DBI=True
mem_k
Output
•
QDR II/II+/II+ Xtreme
mem_wps_n
Output
mem_rps_n
Output
mem_doff_n
Output
mem_d
Output
mem_q
Input
mem_cq
Input
mem_cq_n
Input
mem_bws_n
Output
mem_dk
Output
mem_dk_n
Output
mem_ref_n
Output
Availability
mem_qk
Input
•
QDR II/II+/II+ Xtreme with Enable BWS# pins=True
mem_qk_n
Input
•
RLDRAM 3
Output
•
QDR IV with Use Address Parity Bit=True
mem_pe_n
Input
•
QDR IV with Use Address Parity Bit=True
mem_ainv
Output
•
QDR IV with Address Bus Inversion=True
mem_lda_n
Output
•
QDR IV
mem_lda_b
Output
•
QDR IV
mem_rwa_n
Output
•
QDR IV
mem_rwb_n
Output
•
QDR IV
mem_cfg_n
Output
•
QDR IV
mem_lbk0_n
Output
•
QDR IV
mem_lbk1_n
Output
•
QDR IV
mem_dka
Output
•
QDR IV
mem_dka_n
Output
•
QDR IV
mem_dkb
Output
•
QDR IV
mem_dkb_n
Output
•
QDR IV
mem_qka
Input
•
QDR IV
mem_qka_n
Input
•
QDR IV
mem_qkb
Input
•
QDR IV
mem_qkb_n
Input
•
QDR IV
mem_dqa
Bidirectional
•
QDR IV
mem_dqb
Bidirectional
•
QDR IV
mem_dinva
Bidirectional
•
QDR IV with Data Bus Inversion=True
mem_dinvb
Bidirectional
•
QDR IV with Data Bus Inversion=True
mem_ap
Table 315.
Interface: oct_conduit_end
Interface type: Conduit
Signals in Interface
oct_rzqin
Table 316.
Input
Availability
Always available
Description
The On-Chip Termination (OCT)
RZQ reference resistor input
signal.
Export this interface to the top
level for I/O assignments.
Interface: pll_ref_clk_clock_sink
Signals in Interface
pll_ref_clk
Table 317.
Direction
Interface
Type
Direction
Clock Input
Input
Availability
•
Core clock
sharing=No Sharing /
Master
Description
The PLL reference clock
input signal.
Connect this interface to
the clock output of the
clock source that matches
the PLL reference clock
frequency value set in the
parameter editor.
Interface: status_conduit_end
Signals in Interface
local_cal_success
Interface
Type
Direction
Conduit
Output
local_cal_fail
Availability
Always available
Description
The PHY calibration status
output signals. When the
local_cal_success
signal is asserted, it
indicates that the PHY
calibration was successful.
Otherwise, if
local_cal_fail
signal is asserted, it
indicates that PHY
calibration has failed.
Connect this interface to
the conduit of the user
logic block that requires
the calibration status
information, or leave it
unconnected.
7.5.2 Generated Files for Stratix 10 External Memory Interface IP
When you complete the IP generation flow, there are generated files created in your
project directory. The directory structure created varies somewhat, depending on the
tool used to parameterize and generate the IP.
Note:
The PLL parameters are statically defined in the <variation_name>_parameters.tcl
at generation time. To ensure timing constraints and timing reports are correct, when
you edit the PLL parameters, apply those changes to the PLL parameters in this file.
The following table lists the generated directory structure and key files created when
generating the IP.
Table 318.
Generated Directory Structure and Key Files for Synthesis
Directory
File Name
Description
working_dir/
working_dir/<Top-level Name>/
The Qsys files for your IP component
or system based on your configuration.
working_dir/<Top-level Name>/
*.ppf
Pin Planner File for use with the Pin
Planner.
working_dir/<Top-level Name>/
synth/
<Top-level Name>.v or <Top-level Name>.vhd
Qsys generated top-level wrapper for
synthesis.
working_dir/<Top-level Name>/
altera_emif_S10<acds
version>/synth/
*.v or (*.v and *.vhd)
Stratix 10 EMIF (non-HPS) top-level
dynamic wrapper files for synthesis.
This wrapper instantiates the EMIF ECC
and EMIF Debug Interface IP core.
working_dir/<Top-level Name>/
altera_emif_s10_hps_<acds
version>/synth/
*.v or (*.v and *.vhd)
Stratix 10 EMIF for HPS top-level
dynamic wrapper files for synthesis.
working_dir/<Top-level Name>/
altera_emif_arch_nd_<acds
version>/synth/
*.sv, *.sdc, *.tcl and *.hex and
*_readme.txt
Stratix 10 EMIF Core RTL, constraints
files, ROM content files and information
files for synthesis.
Whether the file type is set to Verilog
or VHDL, all the Stratix 10 EMIF Core
RTL files will be generated as a
SystemVerilog file. The readme.txt file
contains information and guidelines
specific to your configuration.
working_dir/<Top-level Name>/
<other components>_<acds
version>/synth/
*
Other EMIF ECC, EMIF Debug Interface
IP or Merlin Interconnect component
files for synthesis.
Table 319.
Generated Directory Structure and Key Files for Simulation
Directory
File Name
Description
working_dir/<Top-level
Name>/sim/
<Top-level Name>.v or <Top-level Name>.vhd
Qsys generated top-level wrapper for
simulation.
working_dir/<Top-level
Name>/sim/<simulator vendor>/
*.tcl, *cds.lib, *.lib, *.var,
*.sh, *.setup
Simulator-specific simulation scripts.
working_dir/<Top-level Name>/
altera_emif_s10<acds
version>/sim/
*.v or *.vhd
Stratix 10 EMIF (non-HPS) top-level
dynamic wrapper files for simulation.
This wrapper instantiates the EMIF ECC
and EMIF Debug Interface IP cores.
working_dir/<Top-level Name>/
altera_emif_s10_hps_<acds
version>/sim/
*.v or *.vhd
Stratix 10 EMIF for HPS top-level
dynamic wrapper files for simulation.
working_dir/<Top-level Name>/
altera_emif_arch_nd_<acds
version>/sim/
*.sv or (*.sv and *.vhd), *.hex and
*_readme.txt
Stratix 10 EMIF RTL, ROM content files,
and information files for simulation.
For a SystemVerilog or mixed-language simulator, you can use the files from this folder directly. For a VHDL-only simulator, other than the ROM content files, you must use the files in the <current folder>/mentor directory instead.
The readme.txt file contains
information and guidelines specific to
your configuration.
Other EMIF ECC, EMIF Debug Interface
IP, or Merlin Interconnect component
files for simulation
working_dir/<Top-level Name>/
<other components>_<acds
version>/sim/
Table 320.
Description
Generated Directory Structure and Key Files for Qsys-Generated Testbench
System
Directory
File Name
Description
working_dir/<Top-level
Name>_tb/
*.qsys
The Qsys files for the QSYS generated
testbench system.
working_dir/<Top-level
Name>_tb/sim/
<Top-level Name>.v or <Top-level Name>.vhd
Qsys generated testbench file for
simulation.
This wrapper instantiates BFM
components. For Stratix 10 EMIF IP,
this module should instantiate the
memory model for the memory conduit
being exported from your created
system.
working_dir/<Top-level
Name>_tb/<Top-level
Name>_<id>/sim/
<Top-level Name>.v or <Top-level Name>.vhd
Qsys generated top-level wrapper for
simulation.
working_dir/<Top-level
Name>_tb/sim/<simulator
vendor>/
*.tcl, *cds.lib, *.lib, *.var,
*.sh, *.setup
Simulator-specific simulation scripts.
working_dir/<Top-level
Name>_tb/sim/<simulator
vendor>/
*.v or *.vhd
Stratix 10 EMIF (non-HPS) top-level
dynamic wrapper files for simulation.
This wrapper instantiates the EMIF ECC
and EMIF Debug Interface IP cores.
working_dir/<Top-level
Name>_tb/
altera_emif_a10_hps_<acds
version>/sim/
*.v or *.vhd
Stratix 10 EMIF for HPS top-level
dynamic wrapper files for simulation.
working_dir/<Top-level
Name>_tb/
altera_emif_arch_nf_<acds
version>/sim/
*.sv or (*.sv and *.vhd), *.hex and
*_readme.txt
Stratix 10 EMIF Core RTL, ROM content
files and information files for
simulation.
For a SystemVerilog or mixed-language simulator, you may use the files from this folder. For a VHDL-only simulator, you must use the files in the <current folder>/mentor directory instead, except for the ROM content files.
The readme.txt file contains
information and guidelines specific to
your configuration.
working_dir/<Top-level
Name>_tb/sim/
altera_emif_arch_nf_<acds
version>/sim/mentor/
*.sv and *.vhd
Stratix 10 EMIF Core RTL for
simulation.
Only available when you create a VHDL
simulation model. All .sv files are
Mentor-tagged encrypted IP (IEEE
Encrypted Verilog) for VHDL-only
simulator support.
working_dir/<Top-level
Name>_tb/<other
components>_<acds
version>/sim/
*
Other EMIF ECC, EMIF Debug Interface
IP or Merlin Interconnect component
files for simulation.
working_dir/<Top-level
Name>_tb/<other
components>_<acds
version>/sim/mentor/
*
Other EMIF ECC, EMIF Debug Interface
IP or Merlin Interconnect component
files for simulation.
Only available depending on individual
component simulation model support
and when creating a VHDL simulation
model. All files in this folder are
Mentor-tagged encrypted IP (IEEE
Encrypted Verilog) for VHDL-only
simulator support.
Table 321.
Generated Directory Structure and Key Files for Example Simulation Designs
Directory
File Name
Description
working_dir/
*_example_design*/
*.qsys, *.tcl and readme.txt
Qsys files, generation scripts, and
information for generating the Stratix
10 EMIF IP example design.
These files are available only when you
generate an example design. You may
open the .qsys file in Qsys to add more
components to the example design.
working_dir/
*_example_design*/sim/
ed_sim/sim/
*.v or *.vhd
Qsys-generated top-level wrapper for
simulation.
working_dir/
*_example_design*/sim/
ed_sim/<simulator vendor>/
*.tcl, *cds.lib, *.lib, *.var,
*.sh, *.setup
Simulator-specific simulation scripts.
working_dir/
*_example_design*/sim/ip/
ed_sim/ed_sim_emif_s10_0/
altera_emif_s10_<acds_version
>/sim/
*.v or *.vhd
Stratix 10 EMIF (non-HPS) top-level
dynamic wrapper files for simulation.
This wrapper instantiates the EMIF ECC
and EMIF Debug Interface IP cores.
working_dir/
*_example_design*/sim/ip/
ed_sim/ed_sim_emif_s10_0/
altera_emif_arch_nd_<acds_ver
sion>/sim/
*.sv or (*.sv and *.vhd), *.hex and
*_readme.txt
Stratix 10 EMIF RTL, ROM content files,
and information files for simulation. For a SystemVerilog or mixed-language simulator, you can use the files in this folder directly. For a VHDL-only simulator, you must use the files in the <current folder>/mentor directory instead, except for the ROM content files. The readme.txt file contains information and guidelines specific to your configuration.
working_dir/
*_example_design*/sim/ed_sim/
<other
components>_<acds_version>/si
m/
*
Other EMIF ECC, EMIF Debug Interface
IP, or Merlin Interconnect component
files for simulation
and
working_dir/
*_example_design*/sim/ip/
ed_sim/
<other_components>/sim/
and
working_dir/
*_example_design*/sim/ip/
ed_sim/<other_components>/
<other_components>_<acds_vers
ion>/sim/
Table 322.
Generated Directory Structure and Key Files for Example Synthesis Designs
Directory
File Name
Description
working_dir/
*_example_design*/
*.qsys, *.tcl and readme.txt
Qsys files, generation scripts, and
information for generating the Stratix
10 EMIF IP example design.
These files are available only when you
generate an example design. You may
open the .qsys file in Qsys to add more
components to the example design.
working_dir/
*_example_design*/qii/
ed_synth/synth
*.v or (*.v and *.vhd)
Qsys-generated top-level wrapper for
synthesis.
working_dir/
*_example_design*/qii/ip/
ed_synth/
ed_synth_emif_s10_0/
altera_emif_s10_<acds_version
>/synth
*.v or (*.v and *.vhd)
Stratix 10 EMIF (non-HPS) top-level dynamic wrapper files for synthesis. This wrapper instantiates the EMIF ECC and EMIF Debug Interface IP cores.
working_dir/
*_example_design*/qii/ip/
ed_synth/ed_synth_emif_s10_0/
altera_emif_arch_nd_<acds_ver
sion>/synth/
*.sv, *.sdc, *.tcl and *.hex and
*_readme.txt
Stratix 10 EMIF Core RTL, constraints
files, ROM content files and information
files for synthesis. Whether the file type is set to Verilog or VHDL, all Stratix 10 EMIF Core RTL files are generated as SystemVerilog files. The
readme.txt file contains information
and guidelines specific to your
configuration.
working_dir/
*_example_design*/qii/
ed_synth/<other
components>_<acds_version>/
synth
*
Other EMIF ECC, EMIF debug interface
IP, or Merlin interconnect component
files for synthesis.
and
working_dir/
*_example_design*/qii/ip/
ed_synth/<other components>/
synth
and
working_dir/
*_example_design*/qii/ip/
ed_synth/<other components>/
<other
components>_<acds_version>/
synth
7.5.3 Stratix 10 EMIF IP DDR4 Parameters
The Stratix 10 EMIF IP parameter editor allows you to parameterize settings for the
Stratix 10 EMIF IP.
The text window at the bottom of the parameter editor displays information about the
memory interface, as well as warning and error messages. You should correct any
errors indicated in this window before clicking the Finish button.
Note:
Default settings are the minimum required to achieve timing, and may vary depending
on memory protocol.
The following tables describe the parameterization settings available in the parameter
editor for the Stratix 10 EMIF IP.
7.5.3.1 Stratix 10 EMIF IP DDR4 Parameters: General
Table 323.
Group: General / FPGA
Display Name
Identifier
Description
Speed grade
PHY_FPGA_SPEEDGRADE_GUI
Indicates the device speed grade, and whether it is an engineering sample (ES) or production device. This value is based on the device that you select in the parameter editor. If you do not specify a device, the system assumes a default value. Ensure that you always specify the correct device during IP generation; otherwise, your IP may not work in hardware.
Table 324.
Group: General / Interface
Display Name
Identifier
Description
Configuration
PHY_CONFIG_ENUM
Specifies the configuration of the memory interface. The
available options depend on the protocol in use. Options
include Hard PHY and Hard Controller, Hard PHY and Soft
Controller, or Hard PHY only. If you select Hard PHY only,
the AFI interface is exported to allow connection of a
custom memory controller or third-party IP.
Instantiate two controllers
sharing a Ping Pong PHY
PHY_PING_PONG_EN
Specifies the instantiation of two identical memory
controllers that share an address/command bus through the
use of Ping Pong PHY. This parameter is available only if you
specify the Hard PHY and Hard Controller option. When this
parameter is enabled, the IP exposes two independent
Avalon interfaces to the user logic, and a single external
memory interface with double width for the data bus and
the CS#, CKE, ODT, and CK/CK# signals.
Table 325.
Group: General / Clocks
Display Name
Identifier
Description
Core clocks sharing
PHY_CORE_CLKS_SHARING_ENUM
When a design contains multiple interfaces of the same
protocol, rate, frequency, and PLL reference clock source,
they can share a common set of core clock domains. By
sharing core clock domains, they reduce clock network
usage and avoid clock synchronization logic between the
interfaces. To share core clocks, denote one of the
interfaces as "Master", and the remaining interfaces as
"Slave". In the RTL, connect the clks_sharing_master_out
signal from the master interface to the
clks_sharing_slave_in signal of all the slave interfaces. Both
master and slave interfaces still expose their own output
clock ports in the RTL (for example, emif_usr_clk, afi_clk),
but the physical signals are equivalent, hence it does not
matter whether a clock port from a master or a slave is
used. As the combined width of all interfaces sharing the
same core clock increases, you may encounter timing
closure difficulty for transfers between the FPGA core and
the periphery.
Use recommended PLL
reference clock frequency
PHY_DDR4_DEFAULT_REF_CLK_FREQ
Specifies that the PLL reference clock frequency is
automatically calculated for best performance. If you want
to specify a different PLL reference clock frequency, uncheck
the check box for this parameter.
Memory clock frequency
PHY_MEM_CLK_FREQ_MHZ
Specifies the operating frequency of the memory interface
in MHz. If you change the memory frequency, you should
update the memory latency parameters on the "Memory"
tab and the memory timing parameters on the "Mem
Timing" tab.
Clock rate of user logic
PHY_RATE_ENUM
Specifies the relationship between the user logic clock
frequency and the memory clock frequency. For example, if
the memory clock sent from the FPGA to the memory
device is toggling at 800MHz, a quarter-rate interface
means that the user logic in the FPGA runs at 200MHz.
PLL reference clock frequency
PHY_REF_CLK_FREQ_MHZ
Specifies the PLL reference clock frequency. You must
configure this parameter only if you do not check the "Use
recommended PLL reference clock frequency" parameter. To
configure this parameter, select a valid PLL reference clock
frequency from the list. The values in the list can change if
you change the memory interface frequency and/or the
clock rate of the user logic. For best jitter performance, you
should use the fastest possible PLL reference clock
frequency.
PLL reference clock jitter
PHY_REF_CLK_JITTER_PS
Specifies the peak-to-peak jitter on the PLL reference clock
source. The clock source of the PLL reference clock must
meet or exceed the following jitter requirements: 10ps peak
to peak, or 1.42ps RMS at 1e-12 BER, 1.22ps at 1e-16 BER.
Specify additional core clocks
based on existing PLL
PLL_ADD_EXTRA_CLKS
Displays additional parameters allowing you to create
additional output clocks based on the existing PLL. This
parameter provides an alternative clock-generation
mechanism for when your design exhausts available PLL
resources. The additional output clocks that you create can
be fed into the core. Clock signals created with this
parameter are synchronous to each other, but asynchronous
to the memory interface core clock domains (such as
emif_usr_clk or afi_clk). You must follow proper clock-domain-crossing techniques when transferring data between
clock domains.
Table 326.
Group: General / Additional Core Clocks
Display Name
Identifier
Description
Number of additional core clocks
PLL_USER_NUM_OF_EXTRA_CLKS
Specifies the number of additional output clocks to create from the PLL.
7.5.3.2 Stratix 10 EMIF IP DDR4 Parameters: Memory
Table 327.
Group: Memory / Topology
Display Name
Identifier
Description
DQS group of ALERT#
MEM_DDR4_ALERT_N_DQS_GROUP
Select the DQS group with which the ALERT# pin is placed.
ALERT# pin placement
MEM_DDR4_ALERT_N_PLACEMENT_ENUM
Specifies placement for the mem_alert_n signal. If you
select "I/O Lane with Address/Command Pins", you can pick
the I/O lane and pin index in the add/cmd bank with the
subsequent drop down menus. If you select "I/O Lane with
DQS Group", you can specify the DQS group with which to
place the mem_alert_n pin. If you select "Automatically
select a location", the IP automatically selects a pin for the
mem_alert_n signal. If you select this option, no additional
location constraints can be applied to the mem_alert_n pin,
or a fitter error will result during compilation. For optimum
signal integrity, you should choose "I/O Lane with Address/
Command Pins". For interfaces containing multiple memory
devices, it is recommended to connect the ALERT# pins
together to the ALERT# pin on the FPGA.
Enable ALERT#/PAR pins
MEM_DDR4_ALERT_PAR_EN
Allows address/command calibration, which may provide
better margins on the address/command bus. The alert_n
signal is not accessible in the AFI or Avalon domains. This
means there is no way to know whether a parity error has
occurred during user mode. The parity pin is a dedicated pin
in the address/command bank, but the alert_n pin can be
placed in any bank that spans the memory interface. You
should explicitly choose the location of the alert_n pin and
place it in the address/command bank.
Bank address width
MEM_DDR4_BANK_ADDR_WIDTH
Specifies the number of bank address pins. Refer to the
data sheet for your memory device. The density of the
selected memory device determines the number of bank
address pins needed for access to all available banks.
Bank group width
MEM_DDR4_BANK_GROUP_WIDTH
Specifies the number of bank group pins. Refer to the data
sheet for your memory device. The density of the selected
memory device determines the number of bank group pins
needed for access to all available bank groups.
Chip ID width
MEM_DDR4_CHIP_ID_WIDTH
Specifies the number of chip ID pins. Only applicable to
registered and load-reduced DIMMs that use 3DS/TSV
memory devices.
Number of clocks
MEM_DDR4_CK_WIDTH
Specifies the number of CK/CK# clock pairs exposed by the
memory interface. Usually more than 1 pair is required for
RDIMM/LRDIMM formats. The value of this parameter
depends on the memory device selected; refer to the data
sheet for your memory device.
Column address width
MEM_DDR4_COL_ADDR_WIDTH
Specifies the number of column address pins. Refer to the
data sheet for your memory device. The density of the
selected memory device determines the number of address
pins needed for access to all available columns.
Number of chip selects per
DIMM
MEM_DDR4_CS_PER_DIMM
Specifies the number of chip selects per DIMM.
Number of chip selects
MEM_DDR4_DISCRETE_CS_WIDTH
Specifies the total number of chip selects in the interface,
up to a maximum of 4. This parameter applies to discrete
components only.
Data mask
MEM_DDR4_DM_EN
Indicates whether the interface uses data mask (DM) pins.
This feature allows specified portions of the data bus to be
written to memory (not available in x4 mode). One DM pin
exists per DQS group.
Number of DQS groups
MEM_DDR4_DQS_WIDTH
Specifies the total number of DQS groups in the interface.
This value is automatically calculated as the DQ width
divided by the number of DQ pins per DQS group.
DQ pins per DQS group
MEM_DDR4_DQ_PER_DQS
Specifies the total number of DQ pins per DQS group.
DQ width
MEM_DDR4_DQ_WIDTH
Specifies the total number of data pins in the interface. The
maximum supported width is 144, or 72 in Ping Pong PHY
mode.
Memory format
MEM_DDR4_FORMAT_ENUM
Specifies the format of the external memory device. The
following formats are supported: Component - a Discrete
memory device; UDIMM - Unregistered/Unbuffered DIMM
where address/control, clock, and data are unbuffered;
RDIMM - Registered DIMM where address/control and clock
are buffered; LRDIMM - Load Reduction DIMM where
address/control, clock, and data are buffered. LRDIMM
reduces the load to increase memory speed and supports
higher densities than RDIMM; SODIMM - Small Outline
DIMM is similar to UDIMM but smaller in size and is typically
used for systems with limited space. Some memory
protocols may not be available in all formats.
Number of DIMMs
MEM_DDR4_NUM_OF_DIMMS
Total number of DIMMs.
Number of physical ranks per
DIMM
MEM_DDR4_RANKS_PER_DIMM
Number of ranks per DIMM. For LRDIMM, this represents
the number of physical ranks on the DIMM behind the
memory buffer.
Read DBI
MEM_DDR4_READ_DBI
Specifies whether the interface uses read data bus inversion
(DBI). Enable this feature for better signal integrity and
read margin. This feature is not available in x4
configurations.
Row address width
MEM_DDR4_ROW_ADDR_WIDTH
Specifies the number of row address pins. Refer to the data
sheet for your memory device. The density of the selected
memory device determines the number of address pins
needed for access to all available rows.
Write DBI
MEM_DDR4_WRITE_DBI
Indicates whether the interface uses write data bus
inversion (DBI). This feature provides better signal integrity
and write margin. This feature is unavailable if Data Mask is
enabled or in x4 mode.
Table 328.
Group: Memory / Latency and Burst
Display Name
Identifier
Description
Addr/CMD parity latency
MEM_DDR4_AC_PARITY_LATENCY
Additional latency incurred by enabling address/command
parity check. Select a value to enable address/command
parity with the latency associated with the selected value.
Select Disable to disable address/command parity.
Memory additive CAS latency
setting
MEM_DDR4_ATCL_ENUM
Determines the posted CAS additive latency of the memory
device. Enable this feature to improve command and bus
efficiency, and increase system bandwidth.
Burst Length
MEM_DDR4_BL_ENUM
Specifies the DRAM burst length which determines how
many consecutive addresses should be accessed for a given
read/write command.
Read Burst Type
MEM_DDR4_BT_ENUM
Indicates whether accesses within a given burst are in
sequential or interleaved order. Select sequential if you are
using the Intel-provided memory controller.
Memory CAS latency setting
MEM_DDR4_TCL
Specifies the number of clock cycles between the read
command and the availability of the first bit of output data
at the memory device. Overall read latency equals the additive latency (AL) plus the CAS latency (CL); a short worked example follows this table. Overall read latency depends on the memory device selected; refer to the datasheet for your device.
Memory write CAS latency
setting
MEM_DDR4_WTCL
Specifies the number of clock cycles from the release of
internal write to the latching of the first data in at the
memory device. This value depends on the memory device
selected; refer to the datasheet for your device.
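As a quick check of the latency terms above, the following minimal Python sketch computes the overall read latency as AL + CL and converts it to nanoseconds. The memory clock, CL, and AL values are illustrative assumptions only; take the real values from the datasheet for your memory device and speed bin.

# Illustrative sketch only: overall read latency = additive latency (AL) + CAS latency (CL).
# The numeric values below are assumed examples, not recommendations.
mem_clk_mhz = 1200                 # example memory clock frequency (DDR4-2400)
tck_ns = 1000.0 / mem_clk_mhz      # memory clock period in ns
cl = 17                            # example CAS latency, in memory clock cycles
al = 0                             # example additive latency, in memory clock cycles

read_latency_cycles = al + cl
print(f"Read latency: {read_latency_cycles} cycles ({read_latency_cycles * tck_ns:.2f} ns)")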
Table 329.
Group: Memory / Mode Register Settings
Display Name
Identifier
Description
Auto self-refresh method
MEM_DDR4_ASR_ENUM
Indicates whether to enable or disable auto self-refresh.
Auto self-refresh allows the controller to issue self-refresh
requests, rather than manually issuing self-refresh in order
for memory to retain data.
Fine granularity refresh
MEM_DDR4_FINE_GRANULARITY_REFRESH
Increases the frequency of refresh commands in exchange for a shorter refresh cycle time (tRFC); the shorter tRFC can produce higher bandwidth.
Internal VrefDQ monitor
MEM_DDR4_INTERNAL_VREFDQ_MONITOR
Indicates whether to enable the internal VrefDQ monitor.
ODT input buffer during
powerdown mode
MEM_DDR4_ODT_IN_POWERDOWN
Indicates whether to enable on-die termination (ODT) input
buffer during powerdown mode.
Read preamble
MEM_DDR4_READ_PREAMBLE
Number of read preamble cycles. This mode register setting
determines the number of cycles DQS (read) will go low
before starting to toggle.
Self refresh abort
MEM_DDR4_SELF_RFSH_ABORT
Self refresh abort for latency reduction.
Temperature controlled refresh
enable
MEM_DDR4_TEMP_CONTROLLED_RFSH_ENA
Indicates whether to enable temperature controlled refresh,
which allows the device to adjust the internal refresh period
to be longer than tREFI of the normal temperature range by
skipping external refresh commands.
Temperature controlled refresh
range
MEM_DDR4_TEMP_CONTROLLED_RFSH_RANGE
Indicates temperature controlled refresh range where
normal temperature mode covers 0C to 85C and extended
mode covers 0C to 95C.
Write preamble
MEM_DDR4_WRITE_PREAMBLE
Write preamble cycles.
7.5.3.3 Stratix 10 EMIF IP DDR4 Parameters: Mem I/O
Table 330.
Group: Mem I/O / Memory I/O Settings
Display Name
Identifier
Description
DB Host Interface DQ Driver
MEM_DDR4_DB_DQ_
DRV_ENUM
Specifies the driver impedance setting for the host interface
of the data buffer. This parameter determines the value of
the control word BC03 of the data buffer. Perform board
simulation to obtain the optimal value for this setting.
DB Host Interface DQ
RTT_NOM
MEM_DDR4_DB_RTT_
NOM_ENUM
Specifies the RTT_NOM setting for the host interface of the
data buffer. Only "RTT_NOM disabled" is supported. This
parameter determines the value of the control word BC00 of
the data buffer.
DB Host Interface DQ
RTT_PARK
MEM_DDR4_DB_RTT_
PARK_ENUM
Specifies the RTT_PARK setting for the host interface of the
data buffer. This parameter determines the value of control
word BC02 of the data buffer. Perform board simulation to
obtain the optimal value for this setting.
DB Host Interface DQ RTT_WR
MEM_DDR4_DB_RTT_
WR_ENUM
Specifies the RTT_WR setting of the host interface of the
data buffer. This parameter determines the value of the
control word BC01 of the data buffer. Perform board
simulation to obtain the optimal value for this setting.
Use recommended initial
VrefDQ value
MEM_DDR4_DEFAULT
_VREFOUT
Specifies to use the recommended initial VrefDQ value. This
value is used as a starting point and may change after
calibration.
Output drive strength setting
MEM_DDR4_DRV_STR
_ENUM
Specifies the output driver impedance setting at the
memory device. To obtain optimum signal integrity
performance, select option based on board simulation
results.
RCD CA Input Bus Termination
MEM_DDR4_RCD_CA_
IBT_ENUM
Specifies the input termination setting for the following pins
of the registering clock driver: DA0..DA17, DBA0..DBA1,
DBG0..DBG1, DACT_n, DC2, DPAR. This parameter
determines the value of bits DA[1:0] of control word RC7x
of the registering clock driver. Perform board simulation to
obtain the optimal value for this setting.
RCD DCKE Input Bus
Termination
MEM_DDR4_RCD_CKE
_IBT_ENUM
Specifies the input termination setting for the following pins
of the registering clock driver: DCKE0, DCKE1. This
parameter determines the value of bits DA[5:4] of control
word RC7x of the registering clock driver. Perform board
simulation to obtain the optimal value for this setting.
RCD DCS[3:0]_n Input Bus
Termination
MEM_DDR4_RCD_CS_
IBT_ENUM
Specifies the input termination setting for the following pins
of the registering clock driver: DCS[3:0]_n. This parameter
determines the value of bits DA[3:2] of control word RC7x
of the registering clock driver. Perform board simulation to
obtain the optimal value for this setting.
RCD DODT Input Bus
Termination
MEM_DDR4_RCD_ODT
_IBT_ENUM
Specifies the input termination setting for the following pins
of the registering clock driver: DODT0, DODT1. This
parameter determines the value of bits DA[7:6] of control
word RC7x of the registering clock driver. Perform board
simulation to obtain the optimal value for this setting.
ODT Rtt nominal value
MEM_DDR4_RTT_NOM
_ENUM
Determines the nominal on-die termination value applied to
the DRAM. The termination is applied any time that ODT is
asserted. If you specify a different value for RTT_WR, that
value takes precedence over the values mentioned here. For
optimum signal integrity performance, select your option
based on board simulation results.
RTT PARK
MEM_DDR4_RTT_PAR
K
If set, the value is applied when the DRAM is not being
written AND ODT is not asserted HIGH.
Dynamic ODT (Rtt_WR) value
MEM_DDR4_RTT_WR_
ENUM
Specifies the mode of the dynamic on-die termination (ODT)
during writes to the memory device (used for multi-rank
configurations). For optimum signal integrity performance,
select this option based on board simulation results.
RCD and DB Manufacturer
(LSB)
MEM_DDR4_SPD_133_RCD_DB_VENDOR_LSB
Specifies the LSB of the ID code of the registering clock
driver and data buffer manufacturer. The value must come
from Byte 133 of the SPD from the DIMM vendor.
RCD and DB Manufacturer
(MSB)
MEM_DDR4_SPD_134_RCD_DB_VENDOR_MSB
Specifies the MSB of the ID code of the registering clock
driver and data buffer manufacturer. The value must come
from Byte 134 of the SPD from the DIMM vendor.
RCD Revision Number
MEM_DDR4_SPD_135
_RCD_REV
Specifies the die revision of the registering clock driver. The
value must come from Byte 135 of the SPD from the DIMM
vendor.
SPD Byte 137 - RCD Drive
Strength for Command/
Address
MEM_DDR4_SPD_137
_RCD_CA_DRV
Specifies the drive strength of the registering clock driver's
control and command/address outputs to the DRAM. The
value must come from Byte 137 of the SPD from the DIMM
vendor.
SPD Byte 138 - RCD Drive
Strength for CK
MEM_DDR4_SPD_138
_RCD_CK_DRV
Specifies the drive strength of the registering clock driver's
clock outputs to the DRAM. The value must come from Byte
138 of the SPD from the DIMM vendor.
DB Revision Number
MEM_DDR4_SPD_139
_DB_REV
Specifies the die revision of the data buffer. The value must
come from Byte 139 of the SPD from the DIMM vendor.
SPD Byte 140 - DRAM VrefDQ
for Package Rank 0
MEM_DDR4_SPD_140
_DRAM_VREFDQ_R0
Specifies the VrefDQ setting for package rank 0 of an
LRDIMM. The value must come from Byte 140 of the SPD
from the DIMM vendor.
SPD Byte 141 - DRAM VrefDQ
for Package Rank 1
MEM_DDR4_SPD_141
_DRAM_VREFDQ_R1
Specifies the VrefDQ setting for package rank 1 of an
LRDIMM. The value must come from Byte 141 of the SPD
from the DIMM vendor.
SPD Byte 142 - DRAM VrefDQ
for Package Rank 2
MEM_DDR4_SPD_142
_DRAM_VREFDQ_R2
Specifies the VrefDQ setting for package rank 2 (if it exists)
of an LRDIMM. The value must come from Byte 142 of the
SPD from the DIMM vendor.
SPD Byte 143 - DRAM VrefDQ
for Package Rank 3
MEM_DDR4_SPD_143
_DRAM_VREFDQ_R3
Specifies the VrefDQ setting for package rank 3 (if it exists)
of an LRDIMM. The value must come from Byte 143 of the
SPD from the DIMM vendor.
SPD Byte 144 - DB VrefDQ for
DRAM Interface
MEM_DDR4_SPD_144
_DB_VREFDQ
Specifies the VrefDQ setting of the data buffer's DRAM
interface. The value must come from Byte 144 of the SPD
from the DIMM vendor.
SPD Byte 145-147 - DB MDQ
Drive Strength and RTT
MEM_DDR4_SPD_145
_DB_MDQ_DRV
Specifies the drive strength of the MDQ pins of the data
buffer's DRAM interface. The value must come from Byte 145 (data rate ≤ 1866), Byte 146 (1866 < data rate ≤ 2400), or Byte 147 (2400 < data rate ≤ 3200) of the SPD from the DIMM vendor.
SPD Byte 148 - DRAM Drive
Strength
MEM_DDR4_SPD_148
_DRAM_DRV
Specifies the drive strength of the DRAM. The value must
come from Byte 148 of the SPD from the DIMM vendor.
SPD Byte 149-151 - DRAM ODT
(RTT_WR and RTT_NOM)
MEM_DDR4_SPD_149
_DRAM_RTT_WR_NOM
Specifies the RTT_WR and RTT_NOM setting of the DRAM.
The value must come from Byte 149 (data rate ≤ 1866), Byte 150 (1866 < data rate ≤ 2400), or Byte 151 (2400 < data rate ≤ 3200) of the SPD from the DIMM vendor.
SPD Byte 152-154 - DRAM ODT
(RTT_PARK)
MEM_DDR4_SPD_152
_DRAM_RTT_PARK
Specifies the RTT_PARK setting of the DRAM. The value must come from Byte 152 (data rate ≤ 1866), Byte 153 (1866 < data rate ≤ 2400), or Byte 154 (2400 < data rate ≤ 3200) of the SPD from the DIMM vendor.
VrefDQ training range
MEM_DDR4_VREFDQ_
TRAINING_RANGE
VrefDQ training range.
VrefDQ training value
MEM_DDR4_VREFDQ_
TRAINING_VALUE
VrefDQ training value.
Table 331.
Group: Mem I/O / ODT Activation
Display Name
Identifier
Description
Use Default ODT Assertion Tables
MEM_DDR4_USE_DEFAULT_ODT
Enables the default ODT assertion pattern as determined
from vendor guidelines. These settings are provided as a
default only; you should simulate your memory interface to
determine the optimal ODT settings and assertion patterns.
7.5.3.4 Stratix 10 EMIF IP DDR4 Parameters: FPGA I/O
You should use HyperLynx* or similar simulators to determine the best settings for
your board. Refer to the EMIF Simulation Guidance wiki page for additional
information.
Table 332.
Group: FPGA IO / FPGA IO Settings
Display Name
Identifier
Description
Use default I/O settings
PHY_DDR4_DEFAULT_
IO
Specifies that a legal set of I/O settings are automatically
selected. The default I/O settings are not necessarily
optimized for a specific board. To achieve optimal signal
integrity, perform I/O simulations with IBIS models and
enter the I/O settings manually, based on simulation
results.
Voltage
PHY_DDR4_IO_VOLTA
GE
The voltage level for the I/O pins driving the signals
between the memory device and the FPGA memory
interface.
Periodic OCT re-calibration
PHY_USER_PERIODIC
_OCT_RECAL_ENUM
Specifies that the system periodically recalibrate on-chip
termination (OCT) to minimize variations in termination
value caused by changing operating conditions (such as
changes in temperature). By recalibrating OCT, I/O timing
margins are improved. When enabled, this parameter
causes the PHY to halt user traffic about every 0.5 seconds
for about 1900 memory clock cycles, to perform OCT
recalibration. Efficiency is reduced by about 1% when this
option is enabled.
Table 333.
Group: FPGA IO / Address/Command
Display Name
Identifier
Description
I/O standard
PHY_DDR4_USER_AC
_IO_STD_ENUM
Specifies the I/O electrical standard for the address/
command pins of the memory interface. The selected I/O
standard configures the circuit within the I/O buffer to
match the industry standard.
Output mode
PHY_DDR4_USER_AC
_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_DDR4_USER_AC
_SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 334.
Group: FPGA IO / Memory Clock
Display Name
Identifier
Description
I/O standard
PHY_DDR4_USER_CK
_IO_STD_ENUM
Specifies the I/O electrical standard for the memory clock
pins. The selected I/O standard configures the circuit within
the I/O buffer to match the industry standard.
Output mode
PHY_DDR4_USER_CK
_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_DDR4_USER_CK
_SLEW_RATE_ENUM
Specifies the slew rate of the memory clock output pins. The slew rate (or edge rate) describes how quickly the signal can transition, measured in voltage per unit time. Perform board simulations to determine the slew rate that provides the best eye opening for the memory clock signals.
Table 335.
Group: FPGA IO / Data Bus
Display Name
Identifier
Description
Use recommended initial Vrefin
PHY_DDR4_USER_AUTO_STARTING_VREFIN_EN
Specifies that the initial Vrefin setting is calculated
automatically, to a reasonable value based on termination
settings.
Input mode
PHY_DDR4_USER_DA
TA_IN_MODE_ENUM
This parameter allows you to change the input termination
settings for the selected I/O standard. Perform board
simulation with IBIS models to determine the best settings
for your design.
I/O standard
PHY_DDR4_USER_DA
TA_IO_STD_ENUM
Specifies the I/O electrical standard for the data and data
clock/strobe pins of the memory interface. The selected I/O
standard option configures the circuit within the I/O buffer
to match the industry standard.
Output mode
PHY_DDR4_USER_DA
TA_OUT_MODE_ENUM
This parameter allows you to change the output current
drive strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Initial Vrefin
PHY_DDR4_USER_STA
RTING_VREFIN
Specifies the initial value for the reference voltage on the
data pins (Vrefin). This value is entered as a percentage of
the supply voltage level on the I/O pins. The specified value
serves as a starting point and may be overridden by
calibration to provide better timing margins. If you choose
to skip Vref calibration (Diagnostics tab), this is the value
that is used as the Vref for the interface.
Table 336.
Group: FPGA IO / PHY Inputs
Display Name
Identifier
Description
PLL reference clock I/O
standard
PHY_DDR4_USER_PLL_REF_CLK_IO_STD_ENUM
Specifies the I/O standard for the PLL reference clock of the
memory interface.
RZQ I/O standard
PHY_DDR4_USER_RZQ_IO_STD_ENUM
Specifies the I/O standard for the RZQ pin used in the
memory interface.
RZQ resistor
PHY_RZQ
Specifies the reference resistor used to calibrate the on-chip
termination value. You should connect the RZQ pin to GND
through an external resistor of the specified value.
7.5.3.5 Stratix 10 EMIF IP DDR4 Parameters: Mem Timing
These parameters should be read from the table in the datasheet associated with the
speed bin of the memory device (not necessarily the frequency at which the interface
is running).
Table 337.
Group: Mem Timing / Parameters dependent on Speed Bin
Display Name
Identifier
Description
Speed bin
MEM_DDR4_SPEEDBIN_ENUM
The speed grade of the memory device used. This
parameter refers to the maximum rate at which the
memory device is specified to run.
TdiVW_total
MEM_DDR4_TDIVW_TOTAL_UI
TdiVW_total describes the minimum horizontal width of the DQ eye opening required by the receiver (memory device/DIMM). It is measured in UI (1 UI = half the memory clock period); see the conversion example after this table.
tDQSCK
MEM_DDR4_TDQSCK_PS
tDQSCK describes the skew between the memory clock (CK)
and the input data strobes (DQS) used for reads. It is the
time between the rising data strobe edge (DQS, DQS#)
relative to the rising CK edge.
tDQSQ
MEM_DDR4_TDQSQ_UI
tDQSQ describes the latest valid transition of the associated
DQ pins for a READ. tDQSQ specifically refers to the DQS,
DQS# to DQ skew. It is the length of time between the
DQS, DQS# crossing to the last valid transition of the
slowest DQ pin in the DQ group associated with that DQS
strobe.
tDQSS
MEM_DDR4_TDQSS_CYC
tDQSS describes the skew between the memory clock (CK)
and the output data strobes used for writes. It is the time
between the rising data strobe edge (DQS, DQS#) relative
to the rising CK edge.
tDSH
MEM_DDR4_TDSH_CYC
tDSH specifies the write DQS hold time. This is the time
difference between the rising CK edge and the falling edge
of DQS, measured as a percentage of tCK.
tDSS
MEM_DDR4_TDSS_CYC
tDSS describes the time between the falling edge of DQS to
the rising edge of the next CK transition.
tIH (base) DC level
MEM_DDR4_TIH_DC_MV
tIH (base) DC level refers to the voltage level which the
address/command signal must not cross during the hold
window. The signal is considered stable only if it remains
above this voltage level (for a logic 1) or below this voltage
level (for a logic 0) for the entire hold period.
tIH (base)
MEM_DDR4_TIH_PS
tIH (base) refers to the hold time for the Address/Command
(A) bus after the rising edge of CK. Depending on what AC
level the user has chosen for a design, the hold margin can
vary (this variance will be automatically determined when
the user chooses the "tIH (base) AC level").
tINIT
MEM_DDR4_TINIT_US
tINIT describes the time duration of the memory
initialization after a device power-up. After RESET_n is deasserted, wait for another 500us until CKE becomes active.
During this time, the DRAM will start internal initialization;
this will be done independently of external clocks.
tIS (base) AC level
MEM_DDR4_TIS_AC_MV
tIS (base) AC level refers to the voltage level which the
address/command signal must cross and remain above
during the setup margin window. The signal is considered
stable only if it remains above this voltage level (for a logic
1) or below this voltage level (for a logic 0) for the entire
setup period.
tIS (base)
MEM_DDR4_TIS_PS
tIS (base) refers to the setup time for the Address/
Command/Control (A) bus to the rising edge of CK.
tMRD
MEM_DDR4_TMRD_CK_CYC
The mode register set command cycle time, tMRD, is the minimum time period required between two MRS commands.
tQH
MEM_DDR4_TQH_UI
tQH specifies the output hold time for the DQ in relation to
DQS, DQS#. It is the length of time between the DQS,
DQS# crossing to the earliest invalid transition of the
fastest DQ pin in the DQ group associated with that DQS
strobe.
tQSH
MEM_DDR4_TQSH_CYC
tQSH refers to the differential High Pulse Width, which is
measured as a percentage of tCK. It is the time during
which the DQS is high for a read.
tRAS
MEM_DDR4_TRAS_NS
tRAS describes the activate to precharge duration. A row
cannot be deactivated until the tRAS time has been met.
Therefore tRAS determines how long the memory has to
wait after an activate command before a precharge command
can be issued to close the row.
tRCD
MEM_DDR4_TRCD_NS
tRCD, row command delay, describes the amount of delay
between the activation of a row through the RAS command
and the access to the data through the CAS command.
tRP
MEM_DDR4_TRP_NS
tRP refers to the Precharge (PRE) command period. It
describes how long it takes for the memory to disable
access to a row by precharging and before it is ready to
activate a different row.
tWLH
MEM_DDR4_TWLH_PS
tWLH describes the write leveling hold time from the rising
edge of DQS to the rising edge of CK.
tWLS
MEM_DDR4_TWLS_PS
tWLS describes the write leveling setup time. It is measured
from the rising edge of CK to the rising edge of DQS.
tWR
MEM_DDR4_TWR_NS
tWR refers to the Write Recovery time. It specifies the
amount of clock cycles needed to complete a write before a
precharge command can be issued.
VdiVW_total
MEM_DDR4_VDIVW_TOTAL
VdiVW_total describes the Rx Mask voltage, or the minimum
vertical width of the DQ eye opening required by the
receiver (memory device/DIMM). It is measured in mV.
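Because several of the parameters above (TdiVW_total, tDQSQ, tQH) are entered in UI, it can help to convert them to absolute time using the relation given in the table (1 UI = half the memory clock period). The Python sketch below is illustrative only; the frequency and eye-mask width are assumed example values, not data for any particular device.

# Illustrative sketch only: convert a value in UI to picoseconds, where
# 1 UI = half the memory clock period (as stated in the table above).
def ui_to_ps(value_ui, mem_clk_mhz):
    tck_ps = 1.0e6 / mem_clk_mhz   # memory clock period in ps
    return value_ui * (tck_ps / 2.0)

mem_clk_mhz = 1200                 # example memory clock (DDR4-2400)
tdivw_total_ui = 0.2               # example eye-mask width from a datasheet
print(f"TdiVW_total = {ui_to_ps(tdivw_total_ui, mem_clk_mhz):.1f} ps")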
Table 338.
Group: Mem Timing / Parameters dependent on Speed Bin, Operating
Frequency, and Page Size
Display Name
Identifier
Description
tCCD_L
MEM_DDR4_TCCD_L_
CYC
tCCD_L refers to the CAS_n-to-CAS_n delay (long). It is the
minimum time interval between two read/write (CAS)
commands to the same bank group.
tCCD_S
MEM_DDR4_TCCD_S_
CYC
tCCD_S refers to the CAS_n-to-CAS_n delay (short). It is
the minimum time interval between two read/write (CAS)
commands to different bank groups.
tFAW_dlr
MEM_DDR4_TFAW_DL
R_CYC
tFAW_dlr refers to the four activate window to different
logical ranks. It describes the period of time during which
only four banks can be active across all logical ranks within
a 3DS DDR4 device.
tFAW
MEM_DDR4_TFAW_NS
tFAW refers to the four activate window time. It describes
the period of time during which only four banks can be
active.
tRRD_dlr
MEM_DDR4_TRRD_DL
R_CYC
tRRD_dlr refers to the Activate to Activate Command Period
to Different Logical Ranks. It is the minimum time interval
(measured in memory clock cycles) between two activate
commands to different logical ranks within a 3DS DDR4
device.
tRRD_L
MEM_DDR4_TRRD_L_
CYC
tRRD_L refers to the Activate to Activate Command Period
(long). It is the minimum time interval (measured in
memory clock cycles) between two activate commands to
the same bank group.
tRRD_S
MEM_DDR4_TRRD_S_
CYC
tRRD_S refers to the Activate to Activate Command Period
(short). It is the minimum time interval between two
activate commands to different bank groups.
tRTP
MEM_DDR4_TRTP_CY
C
tRTP refers to the internal READ Command to PRECHARGE
Command delay. It is the number of memory clock cycles
that is needed between a read command and a precharge
command to the same rank.
tWTR_L
MEM_DDR4_TWTR_L_
CYC
tWTR_L or Write Timing Parameter describes the delay from
start of internal write transaction to internal read command,
for accesses to the same bank group. The delay is
measured from the first rising memory clock edge after the
last write data is received to the rising memory clock edge
when a read command is received.
tWTR_S
MEM_DDR4_TWTR_S_
CYC
tWTR_S or Write Timing Parameter describes the delay from
start of internal write transaction to internal read command,
for accesses to a different bank group. The delay is
measured from the first rising memory clock edge after the
last write data is received to the rising memory clock edge
when a read command is received.
Table 339.
Group: Mem Timing / Parameters dependent on Density and Temperature
Display Name
Identifier
Description
tREFI
MEM_DDR4_TREFI_US
tREFI refers to the average periodic refresh interval. It is
the maximum amount of time the memory can tolerate in
between each refresh command.
tRFC_dlr
MEM_DDR4_TRFC_DLR_NS
tRFC_dlr refers to the Refresh Cycle Time to different logical
rank. It is the amount of delay after a refresh command to
one logical rank before an activate command can be
accepted by another logical rank within a 3DS DDR4 device.
This parameter is dependent on the memory density and is
necessary for proper hardware functionality.
tRFC
MEM_DDR4_TRFC_NS
tRFC refers to the Refresh Cycle Time. It is the amount of
delay after a refresh command before an activate command
can be accepted by the memory. This parameter is
dependent on the memory density and is necessary for
proper hardware functionality.
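Together, tREFI and tRFC determine roughly how much memory bandwidth is consumed by refresh. The short Python sketch below is illustrative only; the tREFI and tRFC values are assumed examples and must be replaced with the figures from your device's datasheet for the relevant density and temperature range.

# Illustrative sketch only: approximate fraction of time spent in refresh,
# using the definitions above (tREFI = average refresh interval, tRFC = time
# the memory is unavailable after each refresh command). Example values only.
trefi_us = 7.8      # example average periodic refresh interval (microseconds)
trfc_ns = 350.0     # example refresh cycle time (nanoseconds)

refresh_overhead = trfc_ns / (trefi_us * 1000.0)
print(f"Approximate bandwidth lost to refresh: {refresh_overhead:.1%}")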
7.5.3.6 Stratix 10 EMIF IP DDR4 Parameters: Board
Table 340.
Group: Board / Intersymbol Interference/Crosstalk
Display Name
Identifier
Description
Address and command ISI/
crosstalk
BOARD_DDR4_USER_
AC_ISI_NS
The address and command window reduction due to ISI and
crosstalk effects. The number to be entered is the total loss
of margin on both the setup and hold sides (measured loss
on the setup side + measured loss on the hold side). Refer
to the EMIF Simulation Guidance wiki page for additional
information.
Read DQS/DQS# ISI/crosstalk
BOARD_DDR4_USER_
RCLK_ISI_NS
The reduction of the read data window due to ISI and
crosstalk effects on the DQS/DQS# signal when driven by
the memory device during a read. The number to be
entered is the total loss of margin on the setup and hold
sides (measured loss on the setup side + measured loss on
the hold side). Refer to the EMIF Simulation Guidance wiki
page for additional information.
Read DQ ISI/crosstalk
BOARD_DDR4_USER_
RDATA_ISI_NS
The reduction of the read data window due to ISI and
crosstalk effects on the DQ signal when driven by the
memory device during a read. The number to be entered is
the total loss of margin on the setup and hold side
(measured loss on the setup side + measured loss on the
hold side). Refer to the EMIF Simulation Guidance wiki page
for additional information.
Write DQS/DQS# ISI/crosstalk
BOARD_DDR4_USER_
WCLK_ISI_NS
The reduction of the write data window due to ISI and
crosstalk effects on the DQS/DQS# signal when driven by
the FPGA during a write. The number to be entered is the
total loss of margin on the setup and hold sides (measured
loss on the setup side + measured loss on the hold side).
Refer to the EMIF Simulation Guidance wiki page for
additional information.
Write DQ ISI/crosstalk
BOARD_DDR4_USER_
WDATA_ISI_NS
The reduction of the write data window due to ISI and
crosstalk effects on the DQ signal when driven by the FPGA
during a write. The number to be entered is the total loss of
margin on the setup and hold sides (measured loss on the
setup side + measured loss on the hold side). Refer to the
EMIF Simulation Guidance wiki page for additional
information.
Use default ISI/crosstalk values
BOARD_DDR4_USE_DEFAULT_ISI_VALUES
You can enable this option to use default intersymbol
interference and crosstalk values for your topology. Note
that the default values are not optimized for your board. For
optimal signal integrity, it is recommended that you do not
enable this parameter, but instead perform I/O simulation
using IBIS models and HyperLynx*, and manually enter values based on your simulation results.
Table 341.
Group: Board / Board and Package Skews
Display Name
Identifier
Description
Average delay difference
between address/command
and CK
BOARD_DDR4_AC_TO_CK_SKEW_NS
The average delay difference between the address/command signals and the CK signal, calculated as the average of the longest and shortest address/command signal trace delays, minus the maximum CK trace delay (see the calculation sketch after this table). Positive values represent address and command signals that are longer than CK signals; negative values represent address and command signals that are shorter than CK signals.
Maximum board skew within
address/command bus
BOARD_DDR4_BRD_SKEW_WITHIN_AC_NS
The largest skew between the address and command
signals.
Maximum board skew within
DQS group
BOARD_DDR4_BRD_SKEW_WITHIN_DQS_NS
The largest skew between all DQ and DM pins in a DQS
group. This value affects the read capture and write
margins.
Average delay difference
between DQS and CK
BOARD_DDR4_DQS_TO_CK_SKEW_NS
The average delay difference between the DQS signals and
the CK signal, calculated by averaging the longest and
smallest DQS trace delay minus the CK trace delay. Positive
values represent DQS signals that are longer than CK
signals and negative values represent DQS signals that are
shorter than CK signals.
Package deskewed with board
layout (address/command
bus)
BOARD_DDR4_IS_SKEW_WITHIN_AC_DESKEWED
Enable this parameter if you are compensating for package
skew on the address, command, control, and memory clock
buses in the board layout. Include package skew in
calculating the following board skew parameters.
Package deskewed with board
layout (DQS group)
BOARD_DDR4_IS_SKEW_WITHIN_DQS_DESKEWED
Enable this parameter if you are compensating for package
skew on the DQ, DQS, and DM buses in the board layout.
Include package skew in calculating the following board
skew parameters.
Maximum CK delay to DIMM/
device
BOARD_DDR4_MAX_CK_DELAY_NS
The delay of the longest CK trace from the FPGA to any
DIMM/device.
Maximum DQS delay to DIMM/
device
BOARD_DDR4_MAX_DQS_DELAY_NS
The delay of the longest DQS trace from the FPGA to any DIMM/device.
Maximum delay difference
between DIMMs/devices
BOARD_DDR4_SKEW_BETWEEN_DIMMS_NS
The largest propagation delay on DQ signals between ranks
(applicable only when there is more than one rank). For
example: when you configure two ranks using one DIMM
there is a short distance between the ranks for the same DQ
pin; when you implement two ranks using two DIMMs the
distance is larger.
Maximum skew between DQS
groups
BOARD_DDR4_SKEW_BETWEEN_DQS_NS
The largest skew between DQS signals.
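The skew parameters above are derived from per-signal trace delays extracted from the board layout. The Python sketch below shows one way to compute two of them, following the wording of the descriptions in this table; the delay values are made-up examples, and you should confirm the exact calculation against your board skew documentation before entering values.

# Illustrative sketch only: derive board-skew parameters from example trace
# delays (ns). The delays below are invented numbers; extract real values
# from your board layout tool.
ac_delays_ns = [0.61, 0.64, 0.66, 0.70]    # example address/command trace delays
ck_delays_ns = [0.62, 0.63]                # example CK trace delays

# Average delay difference between address/command and CK: average of the
# longest and shortest address/command delays, minus the maximum CK delay.
ac_to_ck_skew_ns = (max(ac_delays_ns) + min(ac_delays_ns)) / 2 - max(ck_delays_ns)

# Maximum board skew within the address/command bus.
skew_within_ac_ns = max(ac_delays_ns) - min(ac_delays_ns)

print(f"Average A/C-to-CK delay difference: {ac_to_ck_skew_ns:+.3f} ns")
print(f"Maximum skew within A/C bus:        {skew_within_ac_ns:.3f} ns")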
7.5.3.7 Stratix 10 EMIF IP DDR4 Parameters: Controller
Table 342.
Group: Controller / Low Power Mode
Display Name
Identifier
Description
Auto Power-Down Cycles
CTRL_DDR4_AUTO_POWER_DOWN_CYCS
Specifies the number of idle controller cycles after which the
memory device is placed into power-down mode. You can
configure the idle waiting time. The supported range for
number of cycles is from 1 to 65534.
Enable Auto Power-Down
CTRL_DDR4_AUTO_POWER_DOWN_EN
Enable this parameter to have the controller automatically
place the memory device into power-down mode after a
specified number of idle controller clock cycles. The idle wait
time is configurable. All ranks must be idle to enter auto
power-down.
Table 343.
Group: Controller / Efficiency
Display Name
Identifier
Description
Address Ordering
CTRL_DDR4_ADDR_ORDER_ENUM
Controls the mapping between Avalon addresses and memory device addresses. By changing the value of this parameter, you can change the mapping between the Avalon-MM address and the DRAM address (CS = chip select, CID = chip ID in 3DS/TSV devices, BG = bank group address, Bank = bank address, Row = row address, Col = column address). A decomposition sketch follows this table.
Enable Auto-Precharge Control
CTRL_DDR4_AUTO_PRECHARGE_EN
Select this parameter to enable the auto-precharge control
on the controller top level. If you assert the auto-precharge
control signal while requesting a read or write burst, you
can specify whether the controller should close (auto-precharge) the currently open page at the end of the read
or write burst, potentially making a future access to a
different page of the same bank faster.
Enable Reordering
CTRL_DDR4_REORDER_EN
Enable this parameter to allow the controller to perform
command and data reordering. Reordering can improve
efficiency by reducing bus turnaround time and row/bank
switching time. Data reordering allows the single-port
memory controller to change the order of read and write
commands to achieve highest efficiency. Command
reordering allows the controller to issue bank management
commands early based on incoming patterns, so that the
desired row in memory is already open when the command
reaches the memory interface. For more information, refer
to the Data Reordering topic in the EMIF Handbook.
Starvation limit for each
command
CTRL_DDR4_STARVE_LIMIT
Specifies the number of commands that can be served
before a waiting command is served. The controller employs
a counter to ensure that all requests are served after a predefined interval; this ensures that low-priority requests are not ignored when data reordering is enabled for efficiency. The
valid range for this parameter is from 1 to 63. For more
information, refer to the Starvation Control topic in the EMIF
Handbook.
Enable Command Priority
Control
CTRL_DDR4_USER_PRIORITY_EN
Select this parameter to enable user-requested command
priority control on the controller top level. This parameter
instructs the controller to treat a read or write request as
high-priority. The controller attempts to fill high-priority
requests sooner, to reduce latency. Connect this interface to
the conduit of your logic block that determines when the
external memory interface IP treats the read or write
request as a high-priority command.
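The Address Ordering setting above determines how the flat Avalon-MM address is split into DRAM address fields. The Python sketch below illustrates the idea for one hypothetical ordering and hypothetical field widths; it is not the IP's actual encoding. The real widths come from the topology parameters (row, bank, bank group, and column address widths) and the real field order from the Address Ordering value you select.

# Illustrative sketch only: split a flat word address into DRAM address fields
# for one assumed ordering (row | bank | bank group | column). Field widths
# are example values, not the IP's actual mapping.
COL_BITS, BG_BITS, BANK_BITS, ROW_BITS = 10, 2, 2, 16

def decompose(addr):
    col = addr & ((1 << COL_BITS) - 1)
    addr >>= COL_BITS
    bg = addr & ((1 << BG_BITS) - 1)
    addr >>= BG_BITS
    bank = addr & ((1 << BANK_BITS) - 1)
    addr >>= BANK_BITS
    row = addr & ((1 << ROW_BITS) - 1)
    return row, bank, bg, col

row, bank, bg, col = decompose(0x01234567)
print(f"row=0x{row:04x} bank={bank} bank_group={bg} col=0x{col:03x}")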
Table 344.
Group: Controller / Configuration, Status, and Error Handling
Display Name
Identifier
Description
Enable Auto Error Correction
CTRL_DDR4_ECC_AUTO_CORRECTION_EN
Specifies that the controller perform auto correction when a
single-bit error is detected by the ECC logic.
Enable Error Detection and
Correction Logic with ECC
CTRL_DDR4_ECC_EN
Enables error-correction code (ECC) for single-bit error
correction and double-bit error detection. Your memory
interface must have a width of 16, 24, 40, or 72 bits to use
ECC. ECC is implemented as soft logic.
Enable Memory-Mapped
Configuration and Status
Register (MMR) Interface
CTRL_DDR4_MMR_EN
Enable this parameter to change or read memory timing
parameters, memory address size, mode register settings,
controller status, and request sideband operations.
Table 345.
Group: Controller / Data Bus Turnaround Time
Display Name
Identifier
Description
Additional read-to-read
turnaround time (different
ranks)
CTRL_DDR4_RD_TO_RD_DIFF_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read of one
logical rank to a read of another logical rank. This can
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional read-to-write
turnaround time (different
ranks)
CTRL_DDR4_RD_TO_WR_DIFF_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read of one
logical rank to a write of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional read-to-write
turnaround time (same rank)
CTRL_DDR4_RD_TO_WR_SAME_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a read to a write
within the same logical rank. This can help resolve bus
contention problems specific to your board topology. The
value is added to the default which is calculated
automatically. Use the default setting unless you suspect a
problem exists.
Additional write-to-read
turnaround time (different
ranks)
CTRL_DDR4_WR_TO_RD_DIFF_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write of one
logical rank to a read of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
Additional write-to-read
turnaround time (same rank)
CTRL_DDR4_WR_TO_RD_SAME_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write to a read
within the same logical rank. This can help resolve bus
contention problems specific to your board topology. The
value is added to the default which is calculated
automatically. Use the default setting unless you suspect a
problem exists.
Additional write-to-write
turnaround time (different
ranks)
CTRL_DDR4_WR_TO_WR_DIFF_CHIP_DELTA_CYCS
Specifies additional number of idle controller (not DRAM)
cycles when switching the data bus from a write of one
logical rank to a write of another logical rank. This can help
resolve bus contention problems specific to your board
topology. The value is added to the default which is
calculated automatically. Use the default setting unless you
suspect a problem exists.
7.5.3.8 Stratix 10 EMIF IP DDR4 Parameters: Diagnostics
Table 346.
Group: Diagnostics / Simulation Options
Display Name
Identifier
Description
Abstract phy for fast simulation
DIAG_DDR4_ABSTRACT_PHY
Specifies that the system use Abstract PHY for simulation.
Abstract PHY replaces the PHY with a model for fast
simulation and can reduce simulation time by 2-3 times.
Abstract PHY is available for certain protocols and device
families, and only when you select Skip Calibration.
Calibration mode
DIAG_SIM_CAL_MODE_ENUM
Specifies whether to skip memory interface calibration
during simulation, or to simulate the full calibration process.
Simulating the full calibration process can take hours (or
even days), depending on the width and depth of the
memory interface. You can achieve much faster simulation
times by skipping the calibration process, but that is only
expected to work when the memory model is ideal and the
interconnect delays are zero. If you enable this parameter,
the interface still performs some memory initialization
before starting normal operations. Abstract PHY is
supported with skip calibration.
Table 347.
Group: Diagnostics / Calibration Debug Options
Display Name
Identifier
Description
Skip address/command
deskew calibration
DIAG_DDR4_SKIP_CA_DESKEW
Specifies to skip the address/command deskew calibration
stage. Address/command deskew performs per-bit deskew
for the address and command pins.
Skip address/command
leveling calibration
DIAG_DDR4_SKIP_CA_LEVEL
Specifies to skip the address/command leveling stage
during calibration. Address/command leveling attempts to
center the memory clock edge against CS# by adjusting
delay elements inside the PHY, and then applying the same
delay offset to the rest of the address and command pins.
Skip VREF calibration
DIAG_DDR4_SKIP_VREF_CAL
Specifies to skip the VREF stage of calibration. Enable this
parameter for debug purposes only; generally, you should
include the VREF calibration stage during normal operation.
Enable Daisy-Chaining for
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_AVALON_MASTER
Specifies that the IP export an Avalon-MM master interface
(cal_debug_out) which can connect to the cal_debug
interface of other EMIF cores residing in the same I/O
column. This parameter applies only if the EMIF Debug
Toolkit or On-Chip Debug Port is enabled. Refer to the
Debugging Multiple EMIFs wiki page for more information
about debugging multiple EMIFs.
Quartus Prime EMIF Debug
Toolkit/On-Chip Debug Port
DIAG_EXPORT_SEQ_AVALON_SLAVE
Specifies the connectivity of an Avalon slave interface for
use by the Quartus Prime EMIF Debug Toolkit or user core
logic. If you set this parameter to "Disabled," no debug
features are enabled. If you set this parameter to "Export,"
an Avalon slave interface named "cal_debug" is exported
from the IP. To use this interface with the EMIF Debug
Toolkit, you must instantiate and connect an EMIF debug
interface IP core to it, or connect it to the cal_debug_out
interface of another EMIF core. If you select "Add EMIF
Debug Interface", an EMIF debug interface component
containing a JTAG Avalon Master is connected to the debug
port, allowing the core to be accessed by the EMIF Debug
Toolkit. Only one EMIF debug interface should be
instantiated per I/O column. You can chain additional EMIF
or PHYLite cores to the first by enabling the "Enable Daisy-Chaining for Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option for all cores in the chain, and selecting
"Export" for the "Quartus Prime EMIF Debug Toolkit/On-Chip
Debug Port" option on all cores after the first.
Interface ID
DIAG_INTERFACE_ID
Identifies interfaces within the I/O column, for use by the
EMIF Debug Toolkit and the On-Chip Debug Port. Interface
IDs should be unique among EMIF cores within the same
I/O column. If the Quartus Prime EMIF Debug Toolkit/On-Chip Debug Port parameter is set to Disabled, the interface
ID is unused.
Use Soft NIOS Processor for
On-Chip Debug
DIAG_SOFT_NIOS_MODE
Enables a soft Nios processor as a peripheral component to
access the On-Chip Debug Port. Only one interface in a
column can activate this option.
Table 348.
Group: Diagnostics / Example Design
Display Name
Identifier
Description
Enable In-System-Sources-and-Probes
DIAG_EX_DESIGN_ISSP_EN
Enables In-System-Sources-and-Probes in the example
design for common debug signals, such as calibration status
or example traffic generator per-bit status. This parameter
must be enabled if you want to do driver margining.
Number of core clocks sharing
slaves to instantiate in the
example design
DIAG_EX_DESIGN_NUM_OF_SLAVES
Specifies the number of core clock sharing slaves to
instantiate in the example design. This parameter applies
only if you set the "Core clocks sharing" parameter in the
"General" tab to Master or Slave.
Table 349.
Group: Diagnostics / Traffic Generator
Display Name
Identifier
Description
Bypass the default traffic
pattern
DIAG_BYPASS_DEFAULT_PATTERN
Specifies that the controller/interface bypass the traffic
generator 2.0 default pattern after reset. If you do not
enable this parameter, the traffic generator does not assert
a pass or fail status until the generator is configured and
signaled to start by its Avalon configuration interface.
Bypass the traffic generator
repeated-writes/repeated-reads test pattern
DIAG_BYPASS_REPEAT_STAGE
Specifies that the controller/interface bypass the traffic
generator's repeat test stage. If you do not enable this
parameter, every write and read is repeated several times.
Bypass the traffic generator
stress pattern
DIAG_BYPASS_STRESS_STAGE
Specifies that the controller/interface bypass the traffic
generator's stress pattern stage. (Stress patterns are meant
to create worst-case signal integrity patterns on the data
pins.) If you do not enable this parameter, the traffic
generator does not assert a pass or fail status until the
generator is configured and signaled to start by its Avalon
configuration interface.
Bypass the user-configured
traffic stage
DIAG_BYPASS_USER_STAGE
Specifies that the controller/interface bypass the user-configured traffic generator's pattern after reset. If you do
not enable this parameter, the traffic generator does not
assert a pass or fail status until the generator is configured
and signaled to start by its Avalon configuration interface.
Configuration can be done by connecting to the traffic
generator via the EMIF Debug Toolkit, or by using custom
logic connected to the Avalon-MM configuration slave port
on the traffic generator. Configuration can also be simulated
using the example testbench provided in the
altera_emif_avl_tg_2_tb.sv file.
Run diagnostic on infinite test
duration
DIAG_INFI_TG2_ERR_TEST
Specifies that the traffic generator run indefinitely until the
first error is detected.
Export Traffic Generator 2.0
configuration interface
DIAG_TG_AVL_2_EXPORT_CFG_INTERFACE
Specifies that the IP export an Avalon-MM slave port for
configuring the Traffic Generator. This is required only if you
are configuring the traffic generator through user logic and
not through the EMIF Debug Toolkit (see the sketch following this table).
Use configurable Avalon traffic
generator 2.0
DIAG_USE_TG_AVL_2
This option allows users to add the new configurable Avalon
traffic generator to the example design.
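The following is a minimal sketch of what user logic driving the exported Traffic Generator 2.0 configuration slave port might look like (the port made available by DIAG_TG_AVL_2_EXPORT_CFG_INTERFACE). The module, port names, and especially the register offsets and data values are placeholders only, not the real traffic generator register map; obtain the actual map from the generated example design or the traffic generator documentation. The sketch simply issues a fixed sequence of Avalon-MM writes, honoring waitrequest.

// Hypothetical sketch of user logic configuring Traffic Generator 2.0 through
// its exported Avalon-MM configuration slave port. Register offsets and data
// values below are placeholders, not the real register map.
module tg_cfg_writer (
  input  logic        clk,              // typically the emif_usr_clk domain
  input  logic        rst_n,
  // Avalon-MM master signals driving the exported configuration slave
  output logic [31:0] cfg_address,
  output logic        cfg_write,
  output logic [31:0] cfg_writedata,
  input  logic        cfg_waitrequest,
  output logic        cfg_done
);
  localparam int NUM_WRITES = 2;
  // {offset, value} pairs -- placeholders only
  localparam logic [63:0] CFG_ROM [NUM_WRITES] = '{
    {32'h0000_0010, 32'h0000_00FF},     // hypothetical: test-length register
    {32'h0000_0000, 32'h0000_0001}      // hypothetical: start command
  };

  int unsigned idx;

  // Present one entry at a time; hold it until waitrequest is deasserted.
  assign cfg_address   = CFG_ROM[idx][63:32];
  assign cfg_writedata = CFG_ROM[idx][31:0];
  assign cfg_write     = rst_n && !cfg_done;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      idx      <= 0;
      cfg_done <= 1'b0;
    end else if (cfg_write && !cfg_waitrequest) begin
      if (idx == NUM_WRITES - 1) cfg_done <= 1'b1;
      else                       idx      <= idx + 1;
    end
  end
endmodule

The same sequencing can instead be driven from the EMIF Debug Toolkit or simulated with the altera_emif_avl_tg_2_tb.sv testbench, as described above.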
Table 350.
Group: Diagnostics / Performance
Display Name
Identifier
Description
Enable Efficiency Monitor
DIAG_EFFICIENCY_MONITOR
Adds an Efficiency Monitor component to the Avalon-MM
interface of the memory controller, allowing you to view
efficiency statistics of the interface. You can access the
efficiency statistics using the EMIF Debug Toolkit.
Table 351.
Group: Diagnostics / Miscellaneous
Display Name
Identifier
Description
Use short Qsys interface names
SHORT_QSYS_INTERFACE_NAMES
Specifies the use of short interface names, for improved
usability and consistency with other Qsys components. If
this parameter is disabled, the names of Qsys interfaces
exposed by the IP will include the type and direction of the
interface. Long interface names are supported for
backward-compatibility and will be removed in a future
release.
7.5.3.9 Stratix 10 EMIF IP DDR4 Parameters: Example Designs
Table 352.
Group: Example Designs / Available Example Designs
Display Name
Identifier
Description
Select design
EX_DESIGN_GUI_DDR4_SEL_DESIGN
Specifies the creation of a full Quartus Prime project,
instantiating an external memory interface and an example
traffic generator, according to your parameterization. After
the design is created, you can specify the target device and
pin location assignments, run a full compilation, verify
timing closure, and test the interface on your board using
the programming file created by the Quartus Prime
assembler. The 'Generate Example Design' button lets you
generate simulation or synthesis file sets.
Table 353.
Group: Example Designs / Example Design Files
Display Name
Identifier
Description
Simulation
EX_DESIGN_GUI_DDR4_GEN_SIM
Specifies that the 'Generate Example Design' button create
all necessary file sets for simulation. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, simulation file sets are not created.
Instead, the output directory will contain the ed_sim.qsys
file which holds Qsys details of the simulation example
design, and a make_sim_design.tcl file with other
corresponding tcl files. You can run make_sim_design.tcl
from a command line to generate the simulation example
design. The generated example designs for various
simulators are stored in the /sim sub-directory.
Synthesis
EX_DESIGN_GUI_DDR4_GEN_SYNTH
Specifies that the 'Generate Example Design' button create
all necessary file sets for synthesis. Expect a short
additional delay as the file set is created. If you do not
enable this parameter, synthesis file sets are not created.
Instead, the output directory will contain the ed_synth.qsys
file which holds Qsys details of the synthesis example
design, and a make_qii_design.tcl script with other
corresponding tcl files. You can run make_qii_design.tcl
from a command line to generate the synthesis example
design. The generated example design is stored in the /qii
sub-directory.
Table 354.
Group: Example Designs / Generated HDL Format
Display Name
Identifier
Description
Simulation HDL format
EX_DESIGN_GUI_DDR4_HDL_FORMAT
This option lets you choose the format of HDL in which generated simulation files are created.
Table 355.
Group: Example Designs / Target Development Kit
Display Name
Identifier
Description
Select board
EX_DESIGN_GUI_DDR4_TARGET_DEV_KIT
Specifies that when you select a development kit with a
memory module, the generated example design contains all
settings and fixed pin assignments to run on the selected
board. You must select a development kit preset to
generate a working example design for the specified
development kit. Any IP settings not applied directly from a
development kit preset will not have guaranteed results
when testing the development kit. To exclude hardware
support of the example design, select 'none' from the
'Select board' pull down menu. When you apply a
development kit preset, all IP parameters are automatically
set appropriately to match the selected preset. If you want
to save your current settings, you should do so before you
apply the preset. You can save your settings under a
different name using File->Save as.
7.5.3.10 About Memory Presets
Presets help simplify the process of copying memory parameter values from memory
device data sheets to the EMIF parameter editor.
For DDRx protocols, the memory presets are named using the following convention:
PROTOCOL-SPEEDBIN LATENCY FORMAT-AND-TOPOLOGY CAPACITY (INTERNAL-ORGANIZATION)
For example, the preset named DDR4-2666U CL18 Component 1CS 2Gb (512Mb
x 4) refers to a DDR4 x4 component rated at the DDR4-2666U JEDEC speed bin, with
nominal CAS latency of 18 cycles, one chip-select, and a total memory space of 2Gb.
The JEDEC memory specification defines multiple speed bins for a given frequency
(for example, DDR4-2666U and DDR4-2666V). You may be able to determine the exact
speed bin implemented by your memory device using its nominal latency. When in
doubt, contact your memory vendor.
For RLDRAMx and QDRx protocols, the memory presets are named based on the
vendor's device part number.
When the preset list does not contain the exact configuration required, you can still
minimize data entry by selecting the preset closest to your configuration and then
modifying parameters as required.
Prior to production you should always review the parameter values to ensure that they
match your memory device data sheet, regardless of whether a preset is used or not.
Incorrect memory parameters can cause functional failures.
7.5.4 Stratix 10 EMIF IP DDR3 Parameters
The Stratix 10 EMIF IP parameter editor allows you to parameterize settings for the
Stratix 10 EMIF IP.
The text window at the bottom of the parameter editor displays information about the
memory interface, as well as warning and error messages. You should correct any
errors indicated in this window before clicking the Finish button.
Note:
Default settings are the minimum required to achieve timing, and may vary depending
on memory protocol.
The following tables describe the parameterization settings available in the parameter
editor for the Stratix 10 EMIF IP.
7.5.4.1 Stratix 10 EMIF IP DDR3 Parameters: General
Table 356.
Group: General / FPGA
Display Name
Identifier
Description
Speed grade
PHY_FPGA_SPEEDGRADE_GUI
Indicates the device speed grade, and whether it is an
engineering sample (ES) or production device. This value is
based on the device that you select in the parameter editor.
If you do not specify a device, the system assumes a
default value. Ensure that you always specify the correct
device during IP generation, otherwise your IP may not
work in hardware.
Table 357.
Group: General / Interface
Display Name
Identifier
Description
Configuration
PHY_CONFIG_ENUM
Specifies the configuration of the memory interface. The
available options depend on the protocol in use. Options
include Hard PHY and Hard Controller, Hard PHY and Soft
Controller, or Hard PHY only. If you select Hard PHY only,
the AFI interface is exported to allow connection of a
custom memory controller or third-party IP.
Instantiate two controllers
sharing a Ping Pong PHY
PHY_PING_PONG_EN
Specifies the instantiation of two identical memory
controllers that share an address/command bus through the
use of Ping Pong PHY. This parameter is available only if you
specify the Hard PHY and Hard Controller option. When this
parameter is enabled, the IP exposes two independent
Avalon interfaces to the user logic, and a single external
memory interface with double width for the data bus and
the CS#, CKE, ODT, and CK/CK# signals.
Table 358.
Group: General / Clocks
Display Name
Identifier
Description
Core clocks sharing
PHY_CORE_CLKS_SHARING_ENUM
When a design contains multiple interfaces of the same
protocol, rate, frequency, and PLL reference clock source,
they can share a common set of core clock domains. By
sharing core clock domains, they reduce clock network
usage and avoid clock synchronization logic between the
interfaces. To share core clocks, denote one of the
interfaces as "Master", and the remaining interfaces as
"Slave". In the RTL, connect the clks_sharing_master_out
signal from the master interface to the
clks_sharing_slave_in signal of all the slave interfaces. Both
master and slave interfaces still expose their own output
clock ports in the RTL (for example, emif_usr_clk, afi_clk),
but the physical signals are equivalent, hence it does not
matter whether a clock port from a master or a slave is
used. As the combined width of all interfaces sharing the
same core clock increases, you may encounter timing
closure difficulty for transfers between the FPGA core and
the periphery.
Use recommended PLL
reference clock frequency
PHY_DDR3_DEFAULT_REF_CLK_FREQ
Specifies that the PLL reference clock frequency is
automatically calculated for best performance. If you want
to specify a different PLL reference clock frequency, uncheck
the check box for this parameter.
Memory clock frequency
PHY_MEM_CLK_FREQ_MHZ
Specifies the operating frequency of the memory interface
in MHz. If you change the memory frequency, you should
update the memory latency parameters on the "Memory"
tab and the memory timing parameters on the "Mem
Timing" tab.
Clock rate of user logic
PHY_RATE_ENUM
Specifies the relationship between the user logic clock
frequency and the memory clock frequency. For example, if
the memory clock sent from the FPGA to the memory
device is toggling at 800MHz, a quarter-rate interface
means that the user logic in the FPGA runs at 200MHz.
PLL reference clock frequency
PHY_REF_CLK_FREQ_MHZ
Specifies the PLL reference clock frequency. You must
configure this parameter only if you do not check the "Use
recommended PLL reference clock frequency" parameter. To
configure this parameter, select a valid PLL reference clock
frequency from the list. The values in the list can change if
you change the memory interface frequency and/or the
clock rate of the user logic. For best jitter performance, you
should use the fastest possible PLL reference clock
frequency.
PLL reference clock jitter
PHY_REF_CLK_JITTER_PS
Specifies the peak-to-peak jitter on the PLL reference clock
source. The clock source of the PLL reference clock must
meet or exceed the following jitter requirements: 10ps peak
to peak, or 1.42ps RMS at 1e-12 BER, 1.22ps at 1e-16 BER.
Specify additional core clocks
based on existing PLL
PLL_ADD_EXTRA_CLKS
Displays additional parameters allowing you to create
additional output clocks based on the existing PLL. This
parameter provides an alternative clock-generation
mechanism for when your design exhausts available PLL
resources. The additional output clocks that you create can
be fed into the core. Clock signals created with this
parameter are synchronous to each other, but asynchronous
to the memory interface core clock domains (such as
emif_usr_clk or afi_clk). You must follow proper clock-domain-crossing techniques when transferring data between
clock domains.
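The following RTL sketch illustrates the master/slave core-clock-sharing connection described above. The clks_sharing_master_out and clks_sharing_slave_in port names follow the parameter description; the module names, instance names, and bundle width are placeholders for whatever your generated EMIF IP actually provides.

// Sketch only: a top level connecting a "Master" EMIF instance to a "Slave"
// EMIF instance for core clock sharing. Module names, instance names, and the
// bundle width are placeholders; use the ports of your generated IP.
module emif_core_clk_sharing_example;
  localparam int CLKS_SHARING_WIDTH = 2;          // placeholder; match the generated RTL

  wire [CLKS_SHARING_WIDTH-1:0] core_clks_sharing;

  // Interface generated with "Core clocks sharing" = Master
  emif_ddr3_master u_emif_master (
    // ... memory, clock/reset, and Avalon ports as generated ...
    .clks_sharing_master_out (core_clks_sharing)
  );

  // Interface generated with "Core clocks sharing" = Slave
  emif_ddr3_slave u_emif_slave (
    // ... memory, clock/reset, and Avalon ports as generated ...
    .clks_sharing_slave_in   (core_clks_sharing)
  );

  // Core logic may use emif_usr_clk from either instance; once sharing is
  // enabled, the clocks are physically equivalent.
endmodule

The same pattern extends to any number of slave interfaces: each slave's clks_sharing_slave_in connects to the single master's clks_sharing_master_out.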
Table 359.
Group: General / Additional Core Clocks
Display Name
Number of additional core
clocks
Identifier
Description
PLL_USER_NUM_OF_EXTRA_CLKS
Specifies the number of additional output clocks to create
from the PLL.
7.5.4.2 Stratix 10 EMIF IP DDR3 Parameters: Memory
Table 360.
Group: Memory / Topology
Display Name
Identifier
Description
DQS group of ALERT#
MEM_DDR3_ALERT_N_DQS_GROUP
Select the DQS group with which the ALERT# pin is placed.
ALERT# pin placement
MEM_DDR3_ALERT_N_PLACEMENT_ENUM
Specifies placement for the mem_alert_n signal. If you
select "I/O Lane with Address/Command Pins", you can pick
the I/O lane and pin index in the add/cmd bank with the
subsequent drop down menus. If you select "I/O Lane with
DQS Group", you can specify the DQS group with which to
place the mem_alert_n pin. If you select "Automatically
select a location", the IP automatically selects a pin for the
mem_alert_n signal. If you select this option, no additional
location constraints can be applied to the mem_alert_n pin,
or a fitter error will result during compilation. For optimum
signal integrity, you should choose "I/O Lane with Address/
Command Pins". For interfaces containing multiple memory
devices, it is recommended to connect the ALERT# pins
together to the ALERT# pin on the FPGA.
Bank address width
MEM_DDR3_BANK_ADDR_WIDTH
Specifies the number of bank address pins. Refer to the
data sheet for your memory device. The density of the
selected memory device determines the number of bank
address pins needed for access to all available banks.
Number of clocks
MEM_DDR3_CK_WIDTH
Specifies the number of CK/CK# clock pairs exposed by the
memory interface. Usually more than 1 pair is required for
RDIMM/LRDIMM formats. The value of this parameter
depends on the memory device selected; refer to the data
sheet for your memory device.
Column address width
MEM_DDR3_COL_ADDR_WIDTH
Specifies the number of column address pins. Refer to the
data sheet for your memory device. The density of the
selected memory device determines the number of address
pins needed for access to all available columns.
Number of chip selects per
DIMM
MEM_DDR3_CS_PER_DIMM
Specifies the number of chip selects per DIMM.
Number of chip selects
MEM_DDR3_DISCRETE_CS_WIDTH
Specifies the total number of chip selects in the interface,
up to a maximum of 4. This parameter applies to discrete
components only.
Enable DM pins
MEM_DDR3_DM_EN
Indicates whether the interface uses data mask (DM) pins.
This feature allows specified portions of the data bus to be
written to memory (not available in x4 mode). One DM pin
exists per DQS group.
Number of DQS groups
MEM_DDR3_DQS_WIDTH
Specifies the total number of DQS groups in the interface.
This value is automatically calculated as the DQ width
divided by the number of DQ pins per DQS group.
DQ pins per DQS group
MEM_DDR3_DQ_PER_DQS
Specifies the total number of DQ pins per DQS group.
DQ width
MEM_DDR3_DQ_WIDTH
Specifies the total number of data pins in the interface. The
maximum supported width is 144, or 72 in Ping Pong PHY
mode.
Memory format
MEM_DDR3_FORMAT_ENUM
Specifies the format of the external memory device. The
following formats are supported: Component - a Discrete
memory device; UDIMM - Unregistered/Unbuffered DIMM
where address/control, clock, and data are unbuffered;
RDIMM - Registered DIMM where address/control and clock
are buffered; LRDIMM - Load Reduction DIMM where
address/control, clock, and data are buffered. LRDIMM
reduces the load to increase memory speed and supports
higher densities than RDIMM; SODIMM - Small Outline
DIMM is similar to UDIMM but smaller in size and is typically
used for systems with limited space. Some memory
protocols may not be available in all formats.
Number of DIMMs
MEM_DDR3_NUM_OF_DIMMS
Total number of DIMMs.
Number of physical ranks per
DIMM
MEM_DDR3_RANKS_PER_DIMM
Number of ranks per DIMM. For LRDIMM, this represents
the number of physical ranks on the DIMM behind the
memory buffer.
Number of rank multiplication
pins
MEM_DDR3_RM_WIDTH
Number of rank multiplication pins used to access all
physical ranks on an LRDIMM. Rank multiplication is a ratio
between the number of physical ranks for an LRDIMM and
the number of logical ranks for the controller. These pins
should be connected to CS#[2] and/or CS#[3] of all
LRDIMMs in the system.
Row address width
MEM_DDR3_ROW_ADDR_WIDTH
Specifies the number of row address pins. Refer to the data
sheet for your memory device. The density of the selected
memory device determines the number of address pins
needed for access to all available rows.
Table 361.
Group: Memory / Latency and Burst
Display Name
Identifier
Description
Memory additive CAS latency
setting
MEM_DDR3_ATCL_ENUM
Determines the posted CAS additive latency of the memory
device. Enable this feature to improve command and bus
efficiency, and increase system bandwidth.
Burst Length
MEM_DDR3_BL_ENUM
Specifies the DRAM burst length which determines how
many consecutive addresses should be accessed for a given
read/write command.
Read Burst Type
MEM_DDR3_BT_ENUM
Indicates whether accesses within a given burst are in
sequential or interleaved order. Select sequential if you are
using the Intel-provided memory controller.
Memory CAS latency setting
MEM_DDR3_TCL
Specifies the number of clock cycles between the read
command and the availability of the first bit of output data
at the memory device. Overall read latency equals the
additive latency (AL) + the CAS latency (CL). Overall read
latency depends on the memory device selected; refer to
the datasheet for your device.
Memory write CAS latency
setting
MEM_DDR3_WTCL
Specifies the number of clock cycles from the release of
internal write to the latching of the first data in at the
memory device. This value depends on the memory device
selected; refer to the datasheet for your device.
Table 362.
Group: Memory / Mode Register Settings
Display Name
Identifier
Description
Auto self-refresh method
MEM_DDR3_ASR_ENUM
Indicates whether to enable or disable auto self-refresh.
Auto self-refresh allows the controller to issue self-refresh
requests, rather than manually issuing self-refresh in order
for memory to retain data.
DDR3 LRDIMM additional
control words
MEM_DDR3_LRDIMM_EXTENDED_CONFIG
Each 4-bit setting can be obtained from the manufacturer's
data sheet and should be entered in hexadecimal, starting
with BC0F on the left and ending with BC00 on the right.
DLL precharge power down
MEM_DDR3_PD_ENUM
Specifies whether the DLL in the memory device is off or on
during precharge power-down.
DDR3 RDIMM/LRDIMM control
words
MEM_DDR3_RDIMM_CONFIG
Each 4-bit/8-bit setting can be obtained from the
manufacturer's data sheet and should be entered in
hexadecimal, starting with the 8-bit setting RCBx on the left
and continuing to RC1x, followed by the 4-bit setting RC0F and ending with RC00 on the right.
Self-refresh temperature
MEM_DDR3_SRT_ENUM
Specifies the self-refresh temperature as "Normal" or
"Extended" mode. More information on Normal and
Extended temperature modes can be found in the memory
device datasheet.
7.5.4.3 Stratix 10 EMIF IP DDR3 Parameters: Mem I/O
Table 363.
Group: Mem I/O / Memory I/O Settings
Display Name
Identifier
Description
Output drive strength setting
MEM_DDR3_DRV_STR_ENUM
Specifies the output driver impedance setting at the
memory device. To obtain optimum signal integrity
performance, select an option based on board simulation
results.
ODT Rtt nominal value
MEM_DDR3_RTT_NOM_ENUM
Determines the nominal on-die termination value applied to
the DRAM. The termination is applied any time that ODT is
asserted. If you specify a different value for RTT_WR, that
value takes precedence over the values mentioned here. For
optimum signal integrity performance, select your option
based on board simulation results.
Dynamic ODT (Rtt_WR) value
MEM_DDR3_RTT_WR_ENUM
Specifies the mode of the dynamic on-die termination (ODT)
during writes to the memory device (used for multi-rank
configurations). For optimum signal integrity performance,
select this option based on board simulation results.
Table 364.
Group: Mem I/O / ODT Activation
Display Name
Identifier
Description
Use Default ODT Assertion Tables
MEM_DDR3_USE_DEFAULT_ODT
Enables the default ODT assertion pattern as determined
from vendor guidelines. These settings are provided as a
default only; you should simulate your memory interface to
determine the optimal ODT settings and assertion patterns.
7.5.4.4 Stratix 10 EMIF IP DDR3 Parameters: FPGA I/O
You should use HyperLynx* or similar simulators to determine the best settings for
your board. Refer to the EMIF Simulation Guidance wiki page for additional
information.
Table 365.
Group: FPGA IO / FPGA IO Settings
Display Name
Identifier
Description
Use default I/O settings
PHY_DDR3_DEFAULT_IO
Specifies that a legal set of I/O settings are automatically
selected. The default I/O settings are not necessarily
optimized for a specific board. To achieve optimal signal
integrity, perform I/O simulations with IBIS models and
enter the I/O settings manually, based on simulation
results.
Voltage
PHY_DDR3_IO_VOLTAGE
The voltage level for the I/O pins driving the signals
between the memory device and the FPGA memory
interface.
Periodic OCT re-calibration
PHY_USER_PERIODIC_OCT_RECAL_ENUM
Specifies that the system periodically recalibrate on-chip
termination (OCT) to minimize variations in termination
value caused by changing operating conditions (such as
changes in temperature). By recalibrating OCT, I/O timing
margins are improved. When enabled, this parameter
causes the PHY to halt user traffic about every 0.5 seconds
for about 1900 memory clock cycles, to perform OCT
recalibration. Efficiency is reduced by about 1% when this
option is enabled.
Table 366.
Group: FPGA IO / Address/Command
Display Name
Identifier
Description
I/O standard
PHY_DDR3_USER_AC_IO_STD_ENUM
Specifies the I/O electrical standard for the address/
command pins of the memory interface. The selected I/O
standard configures the circuit within the I/O buffer to
match the industry standard.
Output mode
PHY_DDR3_USER_AC_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_DDR3_USER_AC_SLEW_RATE_ENUM
Specifies the slew rate of the address/command output
pins. The slew rate (or edge rate) describes how quickly the
signal can transition, measured in voltage per unit time.
Perform board simulations to determine the slew rate that
provides the best eye opening for the address and
command signals.
Table 367.
Group: FPGA IO / Memory Clock
Display Name
Identifier
Description
I/O standard
PHY_DDR3_USER_CK_IO_STD_ENUM
Specifies the I/O electrical standard for the memory clock
pins. The selected I/O standard configures the circuit within
the I/O buffer to match the industry standard.
Output mode
PHY_DDR3_USER_CK_MODE_ENUM
This parameter allows you to change the current drive
strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Slew rate
PHY_DDR3_USER_CK_SLEW_RATE_ENUM
Specifies the slew rate of the memory clock output pins. The slew rate (or edge rate) describes how quickly the signal can transition, measured in voltage per unit time. Perform board simulations to determine the slew rate that provides the best eye opening for the memory clock signals.
Table 368.
Group: FPGA IO / Data Bus
Display Name
Identifier
Description
Use recommended initial Vrefin
PHY_DDR3_USER_AUTO_STARTING_VREFIN_EN
Specifies that the initial Vrefin setting is calculated
automatically, to a reasonable value based on termination
settings.
Input mode
PHY_DDR3_USER_DATA_IN_MODE_ENUM
This parameter allows you to change the input termination
settings for the selected I/O standard. Perform board
simulation with IBIS models to determine the best settings
for your design.
I/O standard
PHY_DDR3_USER_DATA_IO_STD_ENUM
Specifies the I/O electrical standard for the data and data
clock/strobe pins of the memory interface. The selected I/O
standard option configures the circuit within the I/O buffer
to match the industry standard.
Output mode
PHY_DDR3_USER_DATA_OUT_MODE_ENUM
This parameter allows you to change the output current
drive strength or termination settings for the selected I/O
standard. Perform board simulation with IBIS models to
determine the best settings for your design.
Initial Vrefin
PHY_DDR3_USER_STARTING_VREFIN
Specifies the initial value for the reference voltage on the
data pins (Vrefin). This value is entered as a percentage of
the supply voltage level on the I/O pins. The specified value
serves as a starting point and may be overridden by
calibration to provide better timing margins. If you choose
to skip Vref calibration (Diagnostics tab), this is the value
that is used as the Vref for the interface.
Table 369.
Group: FPGA IO / PHY Inputs
Display Name
Identifier
Description
PLL reference clock I/O
standard
PHY_DDR3_USER_PLL_REF_CLK_IO_STD_ENUM
Specifies the I/O standard for the PLL reference clock of the
memory interface.
RZQ I/O standard
PHY_DDR3_USER_RZQ_IO_STD_ENUM
Specifies the I/O standard for the RZQ pin used in the
memory interface.
RZQ resistor
PHY_RZQ
Specifies the reference resistor used to calibrate the on-chip
termination value. You should connect the RZQ pin to GND
through an external resistor of the specified value.
7.5.4.5 Stratix 10 EMIF IP DDR3 Parameters: Mem Timing
These parameters should be read from the table in the datasheet associated with the
speed bin of the memory device (not necessarily the frequency at which the interface
is running).
Table 370.
Group: Mem Timing / Parameters dependent on Speed Bin
Display Name
Identifier
Description
Speed bin
MEM_DDR3_SPEEDBIN_ENUM
The speed grade of the memory device used. This
parameter refers to the maximum rate at which the
memory device is specified to run.
tDH (base) DC level
MEM_DDR3_TDH_DC_MV
tDH (base) DC level refers to the voltage level which the
data bus must not cross during the hold window. The signal
is considered stable only if it remains above this voltage
level (for a logic 1) or below this voltage level (for a logic 0)
for the entire hold period.
tDH (base)
MEM_DDR3_TDH_PS
tDH (base) refers to the hold time for the Data (DQ) bus
after the rising edge of CK.
tDQSCK
MEM_DDR3_TDQSCK_PS
tDQSCK describes the skew between the memory clock (CK)
and the input data strobes (DQS) used for reads. It is the
time between the rising data strobe edge (DQS, DQS#)
relative to the rising CK edge.
tDQSQ
MEM_DDR3_TDQSQ_PS
tDQSQ describes the latest valid transition of the associated
DQ pins for a READ. tDQSQ specifically refers to the DQS,
DQS# to DQ skew.