INFINIBAND
ADAPTER CARDS
PRODUCT BRIEF
Connect-IB™
Single/Dual-Port InfiniBand Host Channel Adapter Cards
Connect-IB adapter cards provide the highest performing and most
scalable interconnect solution for server and storage systems.
High-Performance Computing, Web 2.0, Cloud, Big Data, Financial
Services, Virtualized Data Centers and Storage applications will achieve
significant performance improvements resulting in reduced completion
time and lower cost per operation.
World Class Performance
Connect-IB delivers leading performance with
maximum bandwidth, low latency, and computing
efficiency for performance-driven server and
storage applications. Maximum bandwidth is
delivered across PCI Express 3.0 x16 and two
ports of FDR InfiniBand, supplying more than
100Gb/s of throughput together with consistent
low latency across all CPU cores. Connect-IB also
enables PCI Express 2.0 x16 systems to take full
advantage of FDR, delivering at least twice the
bandwidth of existing PCIe 2.0 solutions.
Connect-IB offloads protocol processing
and data movement from the CPU to the
interconnect, maximizing CPU efficiency and
accelerating parallel and data-intensive
application performance. Connect-IB supports
new data operations, including non-contiguous
memory transfers, which eliminate unnecessary
data copies and CPU overhead. Additional
application acceleration is achieved with a 4X
improvement in message rate compared with
previous generations of InfiniBand cards.
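To put these figures in context, latency and message rate are typically measured with a simple ping-pong microbenchmark. The sketch below is illustrative only and is not taken from this brief; the buffer size and iteration count are arbitrary choices, and any MPI library listed under Protocol Support (for example MVAPICH2) could run it over the adapter.

/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Rank 0 sends a small message to rank 1 and waits for the echo;
 * half of the average round-trip time approximates the one-way latency.
 * Run with at least two ranks, e.g.: mpicc pingpong.c && mpirun -np 2 ./a.out
 * (the file name is a placeholder). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;   /* arbitrary iteration count */
    char buf[8] = {0};         /* small message, latency-bound */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, (int)sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, (int)sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, (int)sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, (int)sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("one-way latency: %.2f usec\n",
               (t1 - t0) * 1e6 / (2.0 * iters));

    MPI_Finalize();
    return 0;
}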
Unlimited Scalability
The next level of scalability and performance
requires a new generation of data and application
accelerations. MellanoX Messaging (MXM)
and Fabric Collective Accelerator (FCA) utilizing
CORE-Direct™ technology accelerate MPI and
PGAS communication performance, taking full
advantage of Connect-IB's enhanced capabilities.
Furthermore, Connect-IB introduces an innovative
transport service, Dynamically Connected
Transport, that ensures unlimited scalability for
clusters of servers and storage systems.
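The collectives that CORE-Direct and FCA accelerate are ordinary MPI operations issued from application code. The sketch below is illustrative only and not part of this brief; it shows a typical MPI_Allreduce of the kind that, with collective offload enabled, can be progressed by the HCA rather than the host CPU.

/* Illustrative MPI collective: a sum-allreduce of one double per rank.
 * With FCA/CORE-Direct collective offload enabled in the MPI library,
 * collectives such as this can be executed by the HCA instead of the CPU. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank;   /* each rank contributes its rank id */
    double sum = 0.0;

    /* Sum the contributions of all ranks; the result reaches every rank. */
    MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over all ranks: %.0f\n", sum);

    MPI_Finalize();
    return 0;
}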
High Performance Storage
Storage nodes will see improved performance
with the higher bandwidth FDR delivers, and
standard block and file access protocols can
leverage InfiniBand RDMA for even better
performance. Connect-IB also supports hardware
checking of T10 Data Integrity Field / Protection
Information (DIF/PI) and other signature types,
reducing CPU overhead and accelerating data
delivery to the application. Signature translation and
handover are also done by the adapter, further
reducing the load on the CPU. Consolidating
compute and storage over FDR InfiniBand with
Connect-IB achieves superior performance while
reducing data center costs and complexities.
Software Support
All Mellanox adapter cards are supported by major
Linux distributions, Microsoft Windows Server,
and VMware ESX. Connect-IB
adapters support OpenFabrics-based RDMA
protocols and software, and are compatible with
configuration and management tools from OEMs
and operating system vendors.
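As an example of the OpenFabrics interface in use, the sketch below (illustrative only; the file name and build command are assumptions, while the verbs calls are standard libibverbs APIs shipped with OFED) enumerates the RDMA devices in a system and queries the first port of the first adapter found.

/* Minimal OpenFabrics verbs sketch (illustrative only): list RDMA devices
 * and query port 1 of the first one. Assumed build command:
 *   gcc query_hca.c -libverbs          (requires libibverbs from OFED) */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devices[0]);
    if (!ctx) {
        fprintf(stderr, "failed to open %s\n",
                ibv_get_device_name(devices[0]));
        return 1;
    }

    struct ibv_device_attr dev_attr;
    struct ibv_port_attr port_attr;
    if (ibv_query_device(ctx, &dev_attr) == 0 &&
        ibv_query_port(ctx, 1, &port_attr) == 0) {
        printf("device: %s\n", ibv_get_device_name(devices[0]));
        printf("max queue pairs: %d\n", dev_attr.max_qp);
        printf("port 1 state: %d  LID: %d  width/speed codes: %d/%d\n",
               (int)port_attr.state, (int)port_attr.lid,
               (int)port_attr.active_width, (int)port_attr.active_speed);
    }

    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}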
HIGHLIGHTS
BENEFITS
–– World-class cluster, network, and storage
performance
–– Guaranteed bandwidth and low-latency
services
–– I/O consolidation
–– Virtualization acceleration
–– Power efficient
–– Scales to tens-of-thousands of nodes
KEY FEATURES
–– Greater than 100Gb/s over InfiniBand
–– Greater than 130M messages/sec
–– 1μs MPI ping latency
–– PCI Express 3.0 x16
–– CPU offload of transport operations
–– Application offload
–– GPU communication acceleration
–– End-to-end internal data protection
–– End-to-end QoS and congestion control
–– Hardware-based I/O virtualization
–– RoHS-R6
Mellanox Advantage
Mellanox Technologies is a leading supplier of
end-to-end interconnect solutions for servers and
storage that optimize the performance and
efficiency of high-performance computing.
Mellanox InfiniBand adapters, switches,
and software are powering the largest
supercomputers in the world. For the best in
server and storage performance and scalability
with the lowest TCO, Mellanox interconnect
products are the solution.
FEATURE SUMMARY*
INFINIBAND
–– IBTA Specification 1.2.1 compliant
–– FDR 56Gb/s InfiniBand
–– Hardware-based congestion control
–– 16 million I/O channels
–– 256 to 4Kbyte MTU, 1Gbyte messages
ENHANCED INFINIBAND
–– Hardware-based reliable transport
–– Extended Reliable Connected transport
–– Dynamically Connected transport service
–– Signature-protected control objects
–– Collective operations offloads
–– GPU communication acceleration
–– Enhanced Atomic operations
STORAGE SUPPORT
–– T10-compliant DIF/PI support
–– Hardware-based data signature handovers
FLEXBOOT™ TECHNOLOGY
–– Remote boot over InfiniBand
HARDWARE-BASED I/O VIRTUALIZATION
–– Single Root IOV**
–– Up to 16 physical functions, 256 virtual
functions
–– Address translation and protection
–– Dedicated adapter resources
–– Multiple queues per virtual machine
–– Enhanced QoS for vNICs and vHCAs
–– VMware NetQueue support
PROTOCOL SUPPORT
–– OpenMPI, IBM PE, Intel MPI, OSU MPI (MVAPICH/2), Platform MPI, UPC, Mellanox SHMEM
–– TCP/UDP, IPoIB, RDS
–– SRP, iSER, NFS RDMA, SMB Direct
–– uDAPL
COMPATIBILITY
PCI EXPRESS INTERFACE
–– PCI Express 2.0 or 3.0 compliant
–– Auto-negotiates to x16, x8, x4, or x1
–– Support for MSI-X mechanisms
CONNECTIVITY
–– Interoperable with InfiniBand switches
–– Passive copper cable with ESD protection
–– Powered connectors for optical and active
cable support
OPERATING SYSTEMS/DISTRIBUTIONS
–– Novell SLES, Red Hat Enterprise Linux (RHEL), and other Linux distributions
–– Microsoft Windows Server 2008/CCS 2003, HPC Server 2008
–– VMware ESX 5.1
–– OpenFabrics Enterprise Distribution (OFED)
–– OpenFabrics Windows Distribution (WinOF)

Ordering Part Number    InfiniBand Ports      PCI Express
MCB191A-FCAT            Single FDR 56Gb/s     3.0 x8
MCB192A-FCAT            Dual FDR 56Gb/s       3.0 x8
MCB193A-FCAT            Single FDR 56Gb/s     3.0 x16
MCB194A-FCAT            Dual FDR 56Gb/s       3.0 x16
*This product brief describes hardware features and capabilities. Please refer to the driver release notes on mellanox.com
for feature availability or contact your local sales representative.
**Future Support
***Product images may not include heat sink assembly; actual product may differ.
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com
© Copyright 2013. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, MLNX-OS, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd.
Connect-IB, CoolBox, FabricIT, Mellanox Federal Systems, Mellanox Software Defined Storage, Mellanox Virtual Modular Switch, MetroX, MetroDX, Mellanox Open Ethernet, Open Ethernet, ScalableHPC,
Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
3933PB Rev 1.6