Using a Dell™ PowerEdge™ M1000e System with an AMCC QT2025 Backplane PHY in a 10GBASE-KR Application

Bhavesh Patel (Dell)

Marika Herod (AMCC)

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

© 2009 Dell Inc. All rights reserved. Reproduction in any manner whatsoever without the express written permission of Dell, Inc. is strictly forbidden. For more information, contact Dell.

Trademarks used in this text: Dell, the DELL logo, and PowerEdge are trademarks of Dell Inc.; AMCC is a registered trademark of Applied Micro Circuits Corporation. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.


Table of Contents

Introduction
Executive Summary
M1000e Architecture
    Midplane Fabric Connections
    I/O Communication Paths in the M1000e
Midplane Design Overview
    Connectors
    PCB material
    Midplane Analysis
    IEEE 802.3ap Requirements for Backplane Channel
    Measured M1000e Channels
        Slot 1 – IOM B1
        Slot 14 – IOM B1
        Slot 11 – IOM B1
        Slot 11 – IOM C2
    Summary of M1000e channels
Overview of the AMCC QT2025 PHY
    Performance Metrics for the M1000e Midplane
QT2025 Performance over M1000e Midplane
    BER Test
    Receiver Interference Tolerance Test
    Repeatability Tests
    Conclusions
References


Introduction

The Dell PowerEdge M1000e Modular Server Enclosure chassis is designed to support data rates up to and including 10 Gbps per lane, which equates to 40 Gbps per port. This white paper examines how this architecture meets the IEEE 802.3ap KR specification by demonstrating bit error rate (BER) and interference tolerance performance using an AMCC QT2025 transceiver, a XAUI to 10GBASE-KR physical layer (PHY) IC. The results show that the M1000e was designed for maximum longevity and reliability by supporting a broad range of standard I/O protocols and a variety of data rates on the midplane.

Executive Summary

The M1000e is designed to support multiple ports of I/O connectivity from the server blade. It supports data rates up to 10 Gbps per lane across a broad range of standard I/O protocols such as Ethernet, Fibre Channel, Infiniband, SAS (Serial Attached SCSI), and PCI Express (Peripheral Component Interconnect Express).

Each half-height server blade supports two mezzanine cards and each full-height server blade supports four mezzanine cards; each mezzanine card can support 40 Gbps per port, where each port consists of four lanes. The key to enabling this capability is the channel design, which provides a passive path through the mezzanine, blade, midplane, and IOM.

Throughout this white paper, we reference the IEEE 802.3ap specification, which outlines the channel characteristics that must be met for a channel to operate with a bit error rate of less than 1E-12 (BER <1E-12). We explain how the M1000e channel was designed to meet this specification, and we compare its performance metrics to the limits defined in the specification.

According to the IEEE 802.3ap specification, meeting the channel requirements indicates a strong likelihood that the channel will meet a BER of less than 1E-12 when using a normative transmitter and receiver. Dell partnered with AMCC to use their QT2025, which is compatible with the IEEE 802.3ap specification, demonstrating that the channel meets BER <1E-12 under all tested conditions.

M1000e Architecture

The M1000e is a breakthrough in enterprise server architecture. The enclosure and its components spring from a revolutionary, ground-up design that incorporates the latest advances in power, cooling, I/O, and management technologies. These technologies are packed into a highly-available, rack-dense package that integrates into standard Dell and third-party 19-inch racks.

The M1000e enclosure is 10U high and supports:

Up to 16 server modules

Up to six network and storage I/O interconnect modules

A high-speed, passive midplane that connects the server modules in the front and power, I/O, and management infrastructure in the rear of the enclosure

Comprehensive I/O options that support links of 40 Gbps (with 4x QDR Infiniband), providing high-speed server module connectivity to the network and storage both now and well into the future

Thorough power management capabilities that deliver shared power so that the full capacity of the power supplies is available to all server modules

Robust management capabilities that include private Ethernet, serial, USB, and low-level management connectivity between the Chassis Management Controller (CMC), Keyboard/Video/Mouse (iKVM) switch, and server modules

Up to two Chassis Management Controllers (one is standard; the second provides optional redundancy) and one optional integrated iKVM switch

Up to six hot-pluggable, redundant power supplies and nine hot-pluggable, N+1 redundant fan modules

A system front control panel with an LCD panel, two USB Keyboard/Mouse connections, and one video "crash cart" connection

Figure 1

M1000e Front View

Midplane Fabric Connections

M1000e blades support three redundant I/O fabrics:

Fabric A consists of dual-integrated Ethernet controllers connected directly from the blade to I/O modules A1 and A2 in the rear of the enclosure. Fabric A is always an Ethernet fabric.

Both Fabric B and Fabric C are supported through optional mezzanine cards on separate x8 PCI Express lanes. Fabrics B and C can support two ports, with each port having four lanes (one lane consists of both transmit and receive differential signals) connected from the mezzanine connector to the I/O module as shown in Figure 2 and Figure 3.


Fabric B and Fabric C I/O modules receive 16 sets of signals, one set from each blade. Fabric TX and RX differential pairs are the high-speed routing lanes, supporting 1.25 Gbps to 10.3125 Gbps. The fabrics internally support a BER of 10^-12 or better.

Figure 2

Fabric B and C Midplane Connections (each blade's Fabric B and Fabric C mezzanine connects through the midplane to Fabric B I/O Modules 1 and 2 and Fabric C I/O Modules 1 and 2, each over 4-lane TX and RX differential pairs, 16 lines per connection)

I/O Communication Paths in the M1000e

As shown in Figure 3, Port 1 of the mezzanine card in Fabric B will communicate with a switch or pass-through inserted in the B1 I/O slot in the rear of the chassis. Port 2 of the mezzanine card in Fabric B will communicate with the switch or pass-through in the B2 I/O slot. The mezzanine card in Fabric C follows a similar scheme.


Figure 3

Fabric A, B, and C Connection to I/O Module (each half-height modular server connects its Fabric A LOM over 1-2 lanes to the Fabric A1 and A2 Ethernet I/O modules, and its Fabric B and Fabric C mezzanines, attached to the MCH/IOH over 8-lane PCIe, over 1-4 lanes to the Fabric B1/B2 and C1/C2 I/O modules and their external fabric connections)

Midplane Design Overview

In the M1000e chassis, Fabrics B and C are designed to support data rates up to and including 10 Gbps per lane. As shown in Figure 3, there are four lanes from each port, and each lane consists of a transmit and receive differential pair that is capable of supporting up to 10 Gbps, which equals 40 Gbps per port.

To achieve this, we paid significant attention to maintaining signal integrity and analyzing each component of the midplane.

Another aspect of the midplane that received heightened focus is the maximum trace length from each blade server to the IOM. We limited the maximum trace length to 15 inches, keeping the total channel length (including traces on the mezzanine, blade, midplane, and IOM) to around 25 inches. This design is highly flexible and, therefore, advantageous when designing different mezzanines and IOMs to support a variety of protocols and data rates.

Following is a brief overview of the types of analysis we performed and the criteria we used. Note that the main components of the midplane are the connectors and PCB material.

Connectors

To select the connectors for the midplane, Dell designed a comprehensive simulation plan that considered insertion loss, skew, impedance, and crosstalk. Based on the simulation data, we selected connectors from three different vendors and built test boards using these connectors based on the topology shown in Figure 4. We tested every board for signal integrity effects and took frequency domain measurements to ensure the channels met the IEEE 802.3ap specification. Based on the results of these tests, Dell selected the connector with the best performance for the midplane.

PCB material

PCB material has a significant impact on signal integrity and I/O performance. Dell included a variety of design criteria to ensure development of an optimal solution. Some of the factors we explored were the PCB material's capability to be used on thick boards, issues with back drilling, weave effects, and loss curves over temperature.

Midplane Analysis

Dell built test midplanes based on the results of the connector and PCB material analyses. The test plan included passive measurements in the time domain and frequency domain. It also included testing that used FPGA (Field Programmable Gate Array) circuits and chipsets capable of transmitting and receiving data from 4 Gbps to 10 Gbps. The objective of the frequency domain test was to determine whether the channels from the blade's I/O mezzanine card (Test Point 1) to the IOM (Test Point 2) met the IEEE 802.3ap KR specification for insertion loss, insertion loss deviation, crosstalk, and insertion loss to crosstalk ratio limits.

The signal path from the mezzanine card to the IOM card is shown in Figure 4.

Figure 4

Channel between Mezzanine and IOM (TX and RX signal path from the mezzanine through the blade, midplane, and IOM connectors, measured from Test Point 1 to Test Point 2)

The transmit channel from the mezzanine side has coupling capacitors at the receiving end of the IOM to provide DC (direct current) isolation; conversely, the transmit channel from the IOM has coupling capacitors at the receiving end of the mezzanine.
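As a rough illustration of why these coupling capacitors do not disturb the data path, the high-pass corner of the resulting RC network can be estimated as follows; the capacitor and termination values below are illustrative assumptions, not values taken from the M1000e design.

import math

# Illustrative values only; the actual M1000e component values are not given in this paper.
c_blocking = 100e-9       # assumed 100 nF AC-coupling capacitor
r_term = 50.0             # assumed 50-ohm single-ended receiver termination

# High-pass corner frequency of the series capacitor into the termination
f_corner = 1.0 / (2.0 * math.pi * r_term * c_blocking)
print(f"AC-coupling corner ~ {f_corner / 1e3:.1f} kHz")   # roughly 32 kHz, far below the KR signal band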


Dell designed the mezzanine and IOM test cards with microwave connectors to test each channel routed from mezzanine to blade to midplane to IOM. We performed detailed design analysis to minimize the launch discontinuity caused by assembling these microwave connectors on a printed circuit board.

IEEE 802.3ap Requirements for Backplane Channel

IEEE 802.3ap defines parameters for channel insertion loss and crosstalk in the backplane environment; if the backplane meets these parameters, the channel is expected to operate at BER <1E-12 with a 10GBASE-KR compliant PHY. Insertion loss is the differential response measured from the blade's I/O mezzanine card (Test Point 1) to the IOM (Test Point 2). Ideally, the insertion loss (IL) magnitude should be within the high-confidence region as shown in Figure 5.

Figure 5

Insertion Loss Limit for 10GBASE-KR

Insertion loss deviation (ILD) is the difference between insertion loss and fitted attenuation. Fitted attenuation is the least mean squares line fit to the insertion loss, computed over the frequency range f1 to f2, where f1 and f2 for KR are, respectively, 1 GHz and 6 GHz. ILD should be within the high-confidence region as shown in Figure 6.
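A minimal numerical sketch of this computation is shown below. The fit basis {1, sqrt(f), f, f^2} follows a commonly used 802.3ap form and should be treated as an assumption; the specification remains the normative reference.

import numpy as np

def insertion_loss_deviation(freq_hz, il_db, f1=1e9, f2=6e9):
    # Fit the measured insertion loss over f1..f2 (1 GHz to 6 GHz for KR)
    mask = (freq_hz >= f1) & (freq_hz <= f2)
    f = freq_hz[mask]
    basis = np.column_stack([np.ones_like(f), np.sqrt(f), f, f ** 2])
    coeffs, *_ = np.linalg.lstsq(basis, il_db[mask], rcond=None)
    # Evaluate the fitted attenuation over the full frequency grid
    full = np.column_stack([np.ones_like(freq_hz), np.sqrt(freq_hz), freq_hz, freq_hz ** 2])
    fitted = full @ coeffs
    # ILD is the measured insertion loss minus the fitted attenuation
    return il_db - fitted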


Figure 6

Insertion Loss Deviation Limits

Insertion loss to crosstalk ratio (ICR) is the ratio of insertion loss measured from TP1 to TP4 to the total crosstalk measured at TP4.
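Expressed in decibels, this ratio reduces to a difference; a one-line sketch, assuming insertion loss and power-sum crosstalk are both given as positive losses in dB:

def icr_db(il_db, psxt_db):
    # Larger ICR means the through signal sits further above the total crosstalk floor
    return psxt_db - il_db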


Figure 7

Insertion Loss to Crosstalk Ratio Limit

Measured M1000e Channels

Dell measured all channels on the M1000e midplane using a vector network analyzer (VNA), and then plotted channel performance against the IEEE 802.3ap specification. Figures 8 through 21 show the performance of representative channels against the specification.

Slot 1 – IOM B1

The Slot 1-IOM B1 channel is the longest from the mezzanine to the IOM; it is about 23.2 inches, and it includes three connectors in the path. The contribution from SMA (SubMiniature version A) connectors is included in the channel characteristics.


Figure 8

Insertion Loss Deviation Limit for Slot 1-IOM B1

The insertion loss deviation graph in Figure 8 shows that the channel is in the high-confidence region between ILD_max_green and ILD_min_green, meaning the channel can run error free with BER <1E-12.

The curve does not have any significant peaks, so a 3-tap or 5-tap DFE is able to recover the signal at 10 Gbps with a bit error rate of <1E-12.
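To illustrate the principle of decision feedback equalization (a toy model, not a description of any particular receiver), a DFE subtracts post-cursor intersymbol interference estimated from its own previous decisions:

import numpy as np

def dfe_nrz(samples, feedback_taps):
    # Toy N-tap DFE over NRZ samples; decisions are +/-1
    decisions = np.zeros(len(samples))
    for i, x in enumerate(samples):
        isi = sum(b * decisions[i - 1 - k]
                  for k, b in enumerate(feedback_taps) if i - 1 - k >= 0)
        decisions[i] = 1.0 if (x - isi) >= 0.0 else -1.0
    return decisions

# Example call for a 3-tap DFE (tap values are hypothetical):
# equalized = dfe_nrz(received_samples, feedback_taps=[0.25, 0.1, 0.05])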


Figure 9

Insertion Loss Minimum Limit for Slot 1-IOM B1

As shown in Figure 9, the Slot 1-B1 insertion loss curve meets the IEEE 802.3ap specification, leaving significant margin above the minimum insertion loss requirement. There are no significant ripples, and the curve has a smooth roll-off, which is important when chipsets use fixed-gain equalizers to boost the signal at particular data rates.


Figure 10

Total Crosstalk Limit for Slot 1-IOM B1

The Total Crosstalk Limit graph in Figure 10 shows that power sum crosstalk (PSXT), which is a power sum of the individual near-end (NEXT) and far-end (FEXT) aggressors, is well below the specification.
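The power sum itself is straightforward to compute from the individual aggressor measurements; a short sketch, assuming each NEXT and FEXT curve is given as a positive loss in dB on a common frequency grid:

import numpy as np

def psxt_db(next_db, fext_db):
    # Stack all aggressor crosstalk curves (each an array of dB loss vs. frequency)
    all_xt = np.vstack(list(next_db) + list(fext_db))
    # Convert each loss to linear power, sum across aggressors, convert back to dB
    total_power = np.sum(10.0 ** (-all_xt / 10.0), axis=0)
    return -10.0 * np.log10(total_power)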


Figure 11

ICR Limit for Slot 1-IOM B1

Insertion to crosstalk ratio (ICR) is the ratio of insertion loss to total crosstalk measured. Because ICR is in the high-confidence region, the effect of aggressor signal amplitude and transition time on the victim signal is low. Figure 11 shows that the curve is well above the specification limit.

Summary

The graphs for Slot 1-B1 show that the channel meets the IEEE 802.3ap specification, indicating that the channel can meet a bit error rate <1E-12. The crosstalk graphs show that Slot 1-B1 is unaffected by crosstalk at both the near end and the far end, even if the aggressor signal has significant amplitude.


Slot 14 – IOM B1

At approximately 15.9 inches in length, Slot 14 – IOM B1 is a medium-length channel.

Figure 12

Insertion Loss Deviation Limit for Slot 14-IOM B1

The insertion loss deviation graphed in Figure 12 shows the channel in the high-confidence region. The ripple contains no significant peaks that a DFE cannot equalize.


Figure 13

Insertion Loss Minimum Limit for Slot 14-IOM B1

The Slot 14-B1 insertion loss curve depicted in Figure 13 passes the specification limit defined by the red curve, showing that the channel can be equalized at 10 Gbps using a DFE, since the dielectric loss is well within the specified limits.


Figure 14

Total Crosstalk Limit for Slot 14-IOM B1


Figure 15

ICR Limit for Slot 14-IOM B1


Summary

Slot 14-B1 data shows that the channel meets the informative interconnect characteristics defined by the IEEE 802.3ap specification. The measurements also show that crosstalk from aggressors, both in the same direction as the victim signal and in the opposite direction, is relatively low. Hence, the contribution to deterministic jitter from crosstalk is low at the receiver end.

Slot 11 – IOM B1

The Slot 11-IOM B1 channel is 21.2 inches including the mezzanine card, blade, midplane, and IOM.

Figure 16

Insertion Loss Minimum Limit for Slot 11-IOM B1

The Slot 11-B1 insertion loss curve performs similarly to the channels shown above.


Figure 17

Insertion Loss Deviation Limit for Slot 11-IOM B1

Figure 18

Total Crosstalk Limit for Slot 11-IOM B1



Figure 19

ICR Limit for Slot 11-IOM B1

Summary

Measurement data for Slot 11-B1 shows that the insertion loss curves and crosstalk data pass the IEEE 802.3ap specification. The channel performance in terms of BER will be similar to the channels mentioned above (<1E-12).


Slot 11 – IOM C2

The Slot 11-IOM C2 channel is about 13.9 inches including the mezzanine, blade, midplane, and IOM card.

Figure 20

Insertion Loss Minimum Limit for Slot 11-IOM C2


Figure 21

Insertion Loss Deviation Limit for Slot 11-IOM C2

Summary of M1000e channels

This white paper provides a synopsis of all channels that support 10 Gbps per lane or 40 Gbps per port and how they perform according to the interconnect characteristics defined by the IEEE 802.3ap specification. Note that IEEE 802.3ap informative interconnect channel characteristics are only defined for traces up to 1m using two connectors, while the channels in M1000e use three connectors.

As shown by the measurement data above, all channels in the M1000e meet the IEEE 802.3ap specification, and, as will be shown later in this white paper, the BER is well below 1E-12 at 10 Gbps per lane.

Overview of the AMCC QT2025 PHY

The QT2025 is an AMCC 1.25/10.3125 Gbps (1G/10G) serial-to-XAUI physical layer IC (PHY) with integrated Electronic Dispersion Compensation (EDC). It is ideally suited to high-density blade systems based on the IEEE 802.3ap 10GBASE-KR standard.


The QT2025 includes a sophisticated backplane receiver with multiple levels of sampling point adjustment, equalization, and EDC circuitry. An on-chip microprocessor optimizes the receiver settings at start-up and as the operating conditions (including temperature, supply, and crosstalk) change. The backplane transmitter includes a 3-tap FIR driver with performance that exceeds the IEEE 802.3ap jitter requirements.
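The effect of a 3-tap FIR driver can be sketched as a pre-cursor, main-cursor, and post-cursor weighting of the transmitted symbols; the tap values below are illustrative only and are not QT2025 settings.

import numpy as np

def tx_fir_3tap(bits, c_pre=-0.1, c_main=0.7, c_post=-0.2):
    # Map {0,1} bits to NRZ symbols {-1,+1}
    symbols = 2.0 * np.asarray(bits, dtype=float) - 1.0
    taps = np.array([c_pre, c_main, c_post])
    # Each output sample weights the next, current, and previous symbol
    return np.convolve(symbols, taps, mode="same")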

The QT2025 employs IEEE 802.3ap Clause 73 Auto-Negotiation to select the fastest supported data rate for both ends of the backplane link. Automatic Link Training enables the receiver to tune the far-end transmit equalizer for optimized bit error rate (BER) performance. On-chip 10GBASE-KR compliant Forward Error Correction (FEC) can be used to improve the link BER by several orders of magnitude.

Figure 22

QT2025 Block Diagram (system-side XAUI or SGMII interface with XGXS and 10/8 coding, 64/66 PCS, FEC, and KR Tx toward the line; KR Rx with EDC, FEC, 66/64 PCS, XGXS 8/10, and XAUI Tx toward the system; plus loopback, PRBS, and test features, an on-chip uC, KR training, and Auto-Negotiation; the line side runs 10GBASE-KR or 1000BASE-KX)

Performance Metrics for AMCC QT2025

Based on the results detailed above, all the channels in the M1000e midplane meet the IEEE 802.3ap specification. Meeting the channel specification ensures that the channels will have a BER less than 1E-12 when using a PHY that meets the IEEE 802.3ap specification.

The most important metric for the backplane receiver is interference tolerance. IEEE 802.3ap defines two representative channels, high loss and medium loss, for this test (refer to Figure 23 and Figure 24 for equivalent AMCC test channels). The receiver has to tolerate 5.2 mV of RMS noise over Channel 1 and 12.1 mV of RMS noise over Channel 2. The RMS noise is injected directly at the host receiver. The test is run with a far-end transmitter output that includes the maximum allowed jitter and the lowest eye opening for worst case. The goal of the interference tolerance test is to ensure that the receiver will operate seamlessly over a wide variety of backplane traces (refer to Figure 25 for a sample trace) and in a noisy system.
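As a back-of-the-envelope illustration of how RMS noise maps to BER for an NRZ link with Gaussian noise, the Q-factor relationship can be used; the eye amplitude below is an assumed value for illustration, not a measured M1000e number.

import math

def ber_from_q(q):
    # Standard Gaussian-noise NRZ relationship: BER = 0.5 * erfc(Q / sqrt(2))
    return 0.5 * math.erfc(q / math.sqrt(2.0))

eye_half_opening_mv = 100.0   # assumed one-sided vertical eye opening at the slicer
noise_rms_mv = 12.1           # RMS noise level used for the 802.3ap Channel 2 test
q = eye_half_opening_mv / noise_rms_mv
print(f"Q = {q:.1f}, estimated BER ~ {ber_from_q(q):.1e}")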


As demonstrated in the section entitled "Receiver Interference Tolerance Test," the PowerEdge M1000e midplane design can tolerate a higher amount of RMS noise, almost 17 mV when using the PRBS31 pattern, and achieve BER <1E-12.

Figures 23, 24, and 25 show channel characteristics of a non-Dell backplane that does not meet the IEEE 802.3ap specification.

Figure 23

IEEE 802.3ap Channel 1 (high loss)

Figure 24

IEEE 802.3ap Channel 2 (low loss)


Figure 25

Sample (not compliant) Backplane Channel

The main parameters of interest for the transmitter include jitter, eye opening, and tight control of the driver pre-cursor and post-cursor peaking (effect of peaking shown in Figure 26; note different scale used for diagrams).

Figure 26

QT2025 Output Waveform At Transmitter and After 27” of PCB

The QT2025 transmitter jitter performance eases the stress presented to the receiver and relaxes requirements on the channel design. This is demonstrated in Figure 27, which shows the improved receiver interference tolerance with the QT2025 transmitter as the far-end traffic source versus the IEEE 802.3ap specified worst-case transmitter.


Figure 27

Effect of 10GBASE-KR Host Transmitter Jitter on Receiver Interference Tolerance

QT2025 Performance over M1000e Midplane

The QT2025 10GBASE-KR performance was thoroughly tested on the M1000e midplane. The experiments were run over selected M1000e midplane traces and included:

Bring-up of the link using Auto Negotiation and KR training

Bit error rate (BER) tests with added crosstalk

Receiver interference tolerance tests

Repeatability/robustness experiments with BER tests run for several weeks

The results of BER, interference tolerance, and robustness tests are given below.

BER Test

The setup for the BER tests is shown in Figure 28. The trace under test (DUT) was Slot11-IOMB1. The longest adjacent channel (aggressor) over the midplane was selected as the crosstalk (NEXT) source.


Figure 28

Setup for BER Test (Blade 1 with Mezz Card 1 and Blade 2 with Mezz Card 2 connect through the midplane to the IOM, with far-end termination shown)

The QT2025 built-in PRBS31 generator was used for the DUT pattern, while the aggressor traffic was varied between the following for different test runs:

(1) JPAT1 (Pattern 1 in Clause 52.9.1 of IEEE 802.3-2005)

(2) JPAT2 (Pattern 2 in Clause 52.9.1 of IEEE 802.3-2005)

(3) PRBS9 (pattern generated using polynomial 1 + x^5 + x^9; see the generator sketch below)

(4) PRBS31 (pattern generated using polynomial 1 + x^28 + x^31)
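A PRBS of this kind can be generated with a linear feedback shift register; a minimal sketch for PRBS9 follows (PRBS31 uses the same structure with taps at stages 28 and 31).

def prbs9(n_bits, seed=0b111111111):
    # bit[n] = bit[n-5] XOR bit[n-9], per the 1 + x^5 + x^9 polynomial
    reg = [(seed >> i) & 1 for i in range(9)]   # reg[0] holds the most recent bit
    out = []
    for _ in range(n_bits):
        new_bit = reg[4] ^ reg[8]               # taps at stages 5 and 9
        out.append(new_bit)
        reg = [new_bit] + reg[:-1]
    return out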

The aggressor output was set to 3.0 V peak-to-peak differential (Vppd), which exceeds the IEEE 802.3ap maximum transmitter output of 1.2 Vppd by a factor of 2.5. The effects of the different aggressor patterns, and of termination at the far end, were studied.

Each BER test was run for a total of 800 seconds, and the resulting number of errors at the receiver was recorded.
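To put an error-free 800-second run in context, the number of bits observed at the 10.3125 Gbps line rate bounds the BER; the standard rule-of-thumb 95% upper bound of 3/N for zero observed errors gives a value comfortably below 1E-12.

line_rate_bps = 10.3125e9                    # 10GBASE-KR line rate
bits_observed = line_rate_bps * 800          # ~8.25e12 bits per 800-second run
ber_upper_95 = 3.0 / bits_observed           # ~3.6e-13 with zero errors observed
print(f"bits observed: {bits_observed:.2e}, 95% upper bound on BER: {ber_upper_95:.1e}")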

Table 1 BER Test Results

Aggressor termination    Aggressor pattern    Errors counted    Test Time (sec)
Not terminated           JPAT1                0                 800
Terminated               JPAT1                0                 800
Not terminated           JPAT2                0                 800
Terminated               JPAT2                0                 800
Not terminated           PRBS9                0                 800
Terminated               PRBS9                0                 800
Not terminated           PRBS31               0                 800
Terminated               PRBS31               0                 800

Receiver Interference Tolerance Test

Figure 29 shows the setup for the interference tolerance test. The DUT traces were Slot11-IOMB1 (longest), Slot14-IOMB1, and Slot11-IOMC2 (shortest).

A noise source with a variable attenuator was coupled directly into the DUT receiver through a directional coupler as shown in Figure 29.

Figure 29

Setup for Interference Tolerance Test (a noise source is coupled into the DUT receiver; Blade 1 with Mezz Card 1 and Blade 2 with Mezz Card 2 connect through the midplane to the IOM, with far-end termination shown)

Built-in QT2025 pattern generators and checkers were used to determine performance. PRBS31 was used for the DUT, while the aggressor pattern was switched between the following patterns for different test runs:

(1) JPAT1

(2) JPAT2

(3) PRBS31

The aggressor output was set to 3.0 Vppd. Tests were run with the aggressor output both left open and terminated at the far end.


The RMS noise was swept in 800-second steps, and the resulting BER was recorded at each step.

The results for Slot11-IOMB1 are shown in Figure 30. As shown, the impact of the aggressor termination and pattern is negligible.

Figure 30

Interference Tolerance Test Results over Slot11-IOMB1

The comparative results for both Slot11-IOMB1 and Slot11-IOMC2 are shown in Figure 31.


Figure 31

Interference Tolerance Results over Slot11-IOMB1 and Slot11-IOMC2

Repeatability Tests

The setup shown in Figure 28 was used for the repeatability tests. The DUT traces were Slot11-IOMB1 (longest) and Slot14-IOMB1. The aggressor output was 3.0 Vppd, and it was not terminated at the far end. The aggressor traffic was 64b/66b encoded traffic.

To determine repeatability and robustness, 800 second BER tests were continuously run five hundred times. After each run, the trained transmitter values, optimized receiver settings, and errors were recorded, and the QT2025 was reset. Each of the five hundred runs had consistent transmit and receive settings and no bit errors.
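Across the five hundred error-free runs, the same rule-of-thumb bound used earlier becomes correspondingly tighter; a quick illustrative calculation:

line_rate_bps = 10.3125e9
total_bits = 500 * 800 * line_rate_bps       # ~4.1e15 bits over all repeatability runs
print(f"aggregate 95% upper bound on BER: {3.0 / total_bits:.1e}")   # roughly 7e-16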

Conclusions

The M1000e system is designed to support any protocol over its midplane, with data rates ranging from 1.25 Gbps to 10 Gbps. The benefits of this revolutionary design are shown in this white paper by highlighting channel performance against the IEEE 802.3ap specification and by running interference tolerance tests using the AMCC QT2025.

The M1000e Backplane System combined with the AMCC QT2025 provides a high-performance, robust solution for high-density 10GBASE-KR applications.


References

IEEE 802.3ap, Ethernet Operation over Electrical Backplanes; part of the IEEE 802.3-2005 specification. http://standards.ieee.org/
