InfiniBand Switch System Family
Highest Levels of Scalability, Simplified Network Manageability, Maximum System Productivity
Mellanox InfiniBand Switch Systems – the highest-performing interconnect solution for Web 2.0, Big Data, Cloud Computing, Enterprise Data Centers (EDC) and High-Performance Computing (HPC).
VALUE PROPOSITIONS
■■ Mellanox switches come in port configurations from 8 to 648 ports at speeds up to 100Gb/s per port, with the ability to build clusters that scale out to tens of thousands of nodes.
■■ Mellanox Unified Fabric Manager™ (UFM™) ensures optimal cluster and data center performance with high availability and reliability.
■■ Mellanox switches support Virtual Protocol Interconnect® (VPI), allowing them to run seamlessly over both InfiniBand and Ethernet.
■■ Support for InfiniBand Router enables the highest scalability and fabric isolation.
■■ Mellanox switches deliver high bandwidth with low latency for the highest server efficiency and application productivity.
■■ Best price/performance solution with error-free 40-100Gb/s link speed.
Mellanox’s family of InfiniBand switches delivers the highest performance and port density with a complete chassis and fabric management solution, enabling compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. The Mellanox family of switches includes a broad portfolio of Edge and Director switches that range from 8 to 648 ports and support 40-100Gb/s per port with the lowest latency. These switches allow IT managers to build the most cost-effective and scalable switch fabrics, from small clusters up to tens of thousands of nodes.
BENEFITS
■■ Industry-leading energy efficiency, density, and cost savings
■■ Ultra-low latency
■■ Granular QoS for Cluster, LAN and SAN traffic
■■ Quick and easy setup and management
■■ Maximizes performance by removing fabric congestion
■■ Fabric management for cluster and converged I/O applications
Virtual Protocol Interconnect® (VPI)
Virtual Protocol Interconnect (VPI) flexibility enables any standard networking, clustering, storage, and management protocol to operate seamlessly over any converged network by leveraging a consolidated software stack. VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.
Why Software Defined Networking (SDN)?
Data center networks have become exceedingly complex. IT managers cannot optimize the networks for their applications, which leads to high CAPEX/OPEX, low ROI, and IT headaches. Mellanox InfiniBand SDN switches ensure separation between the control and data planes. InfiniBand enables centralized management and a global view of the network, programmability of the network by external applications, and a cost-effective, simple, flat interconnect infrastructure.
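The control/data-plane separation behind SDN can be sketched in a few lines: a central controller holds the global topology and computes forwarding tables, while switches only forward according to the tables pushed to them. This is an illustrative model only; the class and function names below are invented for this sketch and are not a Mellanox or OpenSM API.

```python
# Sketch of SDN-style separation: the controller (control plane) computes
# routes from a global topology view; switches (data plane) only forward.
from collections import deque

class Controller:
    """Control plane: global topology view plus route computation."""
    def __init__(self, links):
        # links: iterable of (switch_a, switch_b) bidirectional edges
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, []).append(b)
            self.adj.setdefault(b, []).append(a)

    def forwarding_table(self, switch):
        # BFS from `switch`: for every reachable node, record the first hop.
        table, seen = {}, {switch}
        queue = deque((nbr, nbr) for nbr in self.adj[switch])
        while queue:
            node, first_hop = queue.popleft()
            if node in seen:
                continue
            seen.add(node)
            table[node] = first_hop
            queue.extend((nbr, first_hop) for nbr in self.adj[node])
        return table

class Switch:
    """Data plane: forwards strictly by the table the controller installed."""
    def __init__(self, name):
        self.name, self.table = name, {}
    def install(self, table):
        self.table = table
    def next_hop(self, dest):
        return self.table[dest]

# Four switches in a ring; the controller programs s1's forwarding table.
links = [("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")]
ctrl = Controller(links)
s1 = Switch("s1")
s1.install(ctrl.forwarding_table("s1"))
```

Because routing logic lives only in the controller, an external application can reprogram the network by recomputing and re-installing tables, without touching the switches themselves.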
Edge Switches
Sustained Network Performance
8 to 36-port non-blocking 40 to 100Gb/s InfiniBand Switch Systems
The Mellanox switch family enables efficient computing for clusters of all sizes, from the very small to the extremely large, while offering near-linear scaling in performance. Advanced features such as static routing, adaptive routing, and congestion management allow the switch fabric to dynamically detect congestion and re-route traffic around congested points. These features ensure maximum effective fabric performance under all types of traffic conditions.
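The idea behind adaptive routing can be shown in a minimal sketch: where static routing always uses the same output port for a destination, an adaptive scheme picks, among the valid ports toward the destination, the one with the shallowest queue. The function and data below are illustrative assumptions, not Mellanox's implementation.

```python
# Illustrative adaptive-routing decision: among candidate output ports
# that all reach the destination, steer the flow to the least-congested
# one (shallowest output queue).
def pick_port(candidate_ports, queue_depth):
    """candidate_ports: ports that all reach the destination.
    queue_depth: dict mapping port -> current queue occupancy."""
    return min(candidate_ports, key=lambda p: queue_depth[p])

# Port 2 is congested; the flow is steered to port 3 instead.
depths = {1: 40, 2: 90, 3: 5}
chosen = pick_port([2, 3], depths)
```

A static route would have kept sending traffic to port 2 regardless of its queue; the adaptive choice is what lets the fabric route around points of congestion.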
The Mellanox family of switch systems provides the highest-performing fabric
solutions in a 1U form factor, delivering up to 7.2Tb/s of non-blocking
bandwidth with the lowest port-to-port latency. These edge switches are an
ideal choice for top-of-rack leaf connectivity or for building small to medium
sized clusters. The edge switches, offered as externally managed or as managed
switches, are designed to build the most efficient switch fabrics through the
use of advanced InfiniBand switching technologies such as Adaptive Routing,
Congestion Control and Quality of Service.
Director Switches
108 to 648-port full bi-directional bandwidth 40 to 100Gb/s InfiniBand Switch
Systems
Mellanox director switches provide the highest-density switching solution, scaling
from 8.64Tb/s up to 130Tb/s of bandwidth in a single enclosure, with low latency
and the highest per-port speeds of up to 100Gb/s. Their smart design provides
unprecedented levels of performance and makes it easy to build clusters that can
scale out to thousands of nodes.
The InfiniBand director switches deliver director-class availability required for
mission-critical application environments. The leaf, spine blades and management
modules, as well as the power supplies and fan units, are all hot-swappable to
help eliminate down time.
Reduce Complexity
Mellanox switches reduce complexity by providing seamless connectivity between
InfiniBand, Ethernet and Fibre Channel based networks. You no longer need
separate network technologies with multiple network adapters to operate your
data center fabric. Granular QoS and guaranteed bandwidth allocation can be
applied per traffic type. This ensures that each type of traffic has the resources
needed to sustain the highest application performance.
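Guaranteed bandwidth allocation per traffic type can be sketched as a simple weighted split of link capacity. The traffic classes and weights below are invented for illustration; actual QoS configuration is done through the switch's service levels, not this function.

```python
# Hedged sketch of per-traffic-type bandwidth guarantees: each class is
# guaranteed a share of link bandwidth proportional to its weight, in the
# spirit of the granular QoS described above.
def allocate(link_gbps, weights):
    """Split link capacity across traffic classes by weight."""
    total = sum(weights.values())
    return {cls: link_gbps * w / total for cls, w in weights.items()}

# Example: cluster, LAN and SAN traffic sharing a 100Gb/s link 5:3:2.
shares = allocate(100, {"cluster": 5, "lan": 3, "san": 2})
```

Under load, each class can always claim its guaranteed share, so bursty LAN traffic cannot starve cluster or storage traffic.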
Reduce Environmental Costs
Improved application efficiency, along with the need for fewer network adapters,
allows you to accomplish the same amount of work with fewer, more cost-effective
servers. Improved cooling mechanisms and reduced power consumption and heat
output allow data centers to reduce the costs associated with physical space.
Enhanced Management Capabilities
Mellanox managed switches come with an onboard subnet manager, enabling simple, out-of-the-box fabric bring-up for up to 2K nodes. Mellanox FabricIT™ (IS5000
family) or MLNX-OS™ (SX6000 and SB7000 families) chassis management provides administrative tools to manage the firmware, power supplies, fans, ports, and
other interfaces.
All Mellanox switches can also be coupled with Mellanox’s Unified Fabric Manager (UFM) software for managing scale-out InfiniBand computing environments. UFM
enables data center operators to efficiently provision, monitor and operate the modern data center fabric. UFM boosts application performance and ensures that
the fabric is up and running at all times. MLNX-OS provides a license-activated embedded diagnostic tool, Fabric Inspector, to check node-to-node and node-to-switch
connectivity and ensure fabric health.
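At bring-up, the subnet manager's core job is to sweep the fabric, discover every port, and assign each one a unique local identifier (LID) used for addressing. The sketch below is conceptual and greatly simplified; it is not the OpenSM or MLNX-OS implementation, and the GUID values are made up.

```python
# Conceptual sketch of subnet-manager bring-up: map each discovered port
# GUID to a unique LID. A real subnet manager also computes routing tables
# and programs every switch, which is omitted here.
def assign_lids(discovered_guids, base_lid=1):
    """Assign consecutive LIDs, starting at base_lid, in stable GUID order."""
    return {guid: base_lid + i
            for i, guid in enumerate(sorted(discovered_guids))}

# Two hypothetical ports discovered during the fabric sweep.
lids = assign_lids({"0x0002c903000a", "0x0002c903000b"})
```

Sorting the GUIDs keeps the assignment deterministic across sweeps, so a port keeps its LID as long as the fabric membership is unchanged.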
Edge Switches

| | IS5022 | IS5023 | IS5024 | IS5025 | SX6025 | SB7790 |
| Ports | 8 | 18 | 36 | 36 | 36 | 36 |
| Height | 1U | 1U | 1U | 1U | 1U | 1U |
| Switching Capacity | 640Gb/s | 1.44Tb/s | 2.88Tb/s | 2.88Tb/s | 4.032Tb/s | 7.2Tb/s |
| Link Speed | 40Gb/s | 40Gb/s | 40Gb/s | 40Gb/s | 56Gb/s | 100Gb/s |
| Interface Type | QSFP | QSFP | QSFP | QSFP | QSFP+ | QSFP28 |
| Management | No | No | No | No | No | No |
| PSU Redundancy | No | No | No | Yes | Yes | Yes |
| Fan Redundancy | No | No | No | Yes | Yes | Yes |
| Integrated Gateway | – | – | – | – | – | – |

| | SX6005 | SX6012 | SX6015 | SX6018 | IS5030 | IS5035 | SX6036 | SB7700 |
| Ports | 12 | 12 | 18 | 18 | 36 | 36 | 36 | 36 |
| Height | 1U | 1U | 1U | 1U | 1U | 1U | 1U | 1U |
| Switching Capacity | 1.3Tb/s | 1.3Tb/s | 2.016Tb/s | 2.016Tb/s | 2.88Tb/s | 2.88Tb/s | 4.032Tb/s | 7.2Tb/s |
| Link Speed | 56Gb/s | 56Gb/s | 56Gb/s | 56Gb/s | 40Gb/s | 40Gb/s | 56Gb/s | 100Gb/s |
| Interface Type | QSFP+ | QSFP+ | QSFP+ | QSFP+ | QSFP | QSFP | QSFP+ | QSFP28 |
| Management | No | Yes (648 nodes) | No | Yes (648 nodes) | Yes (108 nodes) | Yes (648 nodes) | Yes (648 nodes) | Yes (2048 nodes) |
| Management Ports | – | 1 | – | 2 | 1 | 2 | 2 | 2 |
| PSU Redundancy | No | Optional | Yes | Yes | Yes | Yes | Yes | Yes |
| Fan Redundancy | No | No | Yes | Yes | Yes | Yes | Yes | Yes |
| Integrated Gateway | – | Optional | – | Optional | – | – | Optional | – |
Director Switches

| | SX6506 | SX6512 | CS7520 | SX6518 | SX6536 | CS7500 |
| Ports | 108 | 216 | 216 | 324 | 648 | 648 |
| Height | 6U | 9U | 12U | 16U | 29U | 28U |
| Switching Capacity | 12.12Tb/s | 24.24Tb/s | 43.2Tb/s | 36.36Tb/s | 72.52Tb/s | 130Tb/s |
| Link Speed | 56Gb/s | 56Gb/s | 100Gb/s | 56Gb/s | 56Gb/s | 100Gb/s |
| Interface Type | QSFP+ | QSFP+ | QSFP28 | QSFP+ | QSFP+ | QSFP28 |
| Management | 648 nodes | 648 nodes | 2048 nodes | 648 nodes | 648 nodes | 2048 nodes |
| Management HA | Yes | Yes | Yes | Yes | Yes | Yes |
| Console Cables | Yes | Yes | Yes | Yes | Yes | Yes |
| Spine Modules | 3 | 6 | 6 | 9 | 18 | 18 |
| Leaf Modules (Max) | 6 | 12 | 6 | 18 | 36 | 18 |
| PSU Redundancy | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N) |
| Fan Redundancy | Yes | Yes | Yes | Yes | Yes | Yes |
FEATURE SUMMARY
HARDWARE
–– 40-100Gb/s per port
–– Full bisectional bandwidth to all ports
–– IBTA 1.21 and 1.3 compliant
–– QSFP connectors supporting passive and active cables
–– Redundant auto-sensing 110/220VAC power supplies
–– Per-port Link and Activity status LEDs
–– System, fan and PS status LEDs
–– Hot-swappable replaceable fan trays
MANAGEMENT
–– Mellanox Operating System (MLNX-OS/FabricIT)
• Switch chassis management
• Embedded Subnet Manager (648 nodes)
• Error, event and status notifications
• Quality of Service based on traffic type and service levels
–– Coupled with Mellanox Unified Fabric Manager (UFM)
• Comprehensive fabric management
• Secure, remote configuration and management
• Performance/provisioning manager
–– Fabric Inspector
• Cluster diagnostics tools for single node, peer-to-peer and network verification
COMPLIANCE
SAFETY
–– USA/Canada: cTUVus
–– EU: IEC60950
–– International: CB Scheme
–– Russia: GOST-R
–– Argentina: S-mark
EMC (EMISSIONS)
–– USA: FCC, Class A
–– Canada: ICES, Class A
–– EU: EN55022, Class A
–– EU: EN55024, Class A
–– EU: EN61000-3-2, Class A
–– EU: EN61000-3-3, Class A
–– Japan: VCCI, Class A
–– Australia: C-TICK
ENVIRONMENTAL
–– EU: IEC 60068-2-64: Random Vibration
–– EU: IEC 60068-2-29: Shocks, Type I / II
–– EU: IEC 60068-2-32: Fall Test
OPERATING CONDITIONS
–– Temperature: Operating 0°C to 45°C, Non-operating -40°C to 70°C
–– Humidity: Operating 5% to 95%
–– Altitude: Operating -60 to 2000m
ACOUSTIC
–– ISO 7779
–– ETS 300 753
OTHERS
–– RoHS-6 compliant
–– 1-year warranty
350 Oakmead Parkway, Suite 100
Sunnyvale, CA 94085
Tel: 408-970-3400
Fax: 408-970-3403
www.mellanox.com
© Copyright 2015. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, IPtronics, Kotura, MLNX-OS, PhyX, SwitchX, UltraVOA, Virtual Protocol Interconnect and Voltaire are registered trademarks of
Mellanox Technologies, Ltd. Connect-IB, CoolBox, FabricIT, Mellanox Federal Systems, Mellanox Software Defined Storage, MetroX, MetroDX, Mellanox Open Ethernet, Mellanox Virtual Modular Switch, Open
Ethernet, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
3531BR Rev 4.6