INFINIBAND/VPI ADAPTER CARDS
PRODUCT BRIEF

ConnectX-3 VPI Single/Dual-Port Adapters with Virtual Protocol Interconnect

ConnectX-3 adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity, provide the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers used in Enterprise Data Centers, High-Performance Computing, and Embedded environments.

Clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. ConnectX-3 with VPI also simplifies system development by serving multiple fabrics with one hardware design.

HIGHLIGHTS

BENEFITS
- One adapter for InfiniBand, 10/40/56 Gig Ethernet, or Data Center Bridging fabrics
- World-class cluster, network, and storage performance
- Guaranteed bandwidth and low-latency services
- I/O consolidation
- Virtualization acceleration
- Power efficient
- Scales to tens-of-thousands of nodes

KEY FEATURES
- Virtual Protocol Interconnect
- 1us MPI ping latency
- Up to 56Gb/s InfiniBand or 40/56 Gigabit Ethernet per port
- Single- and Dual-Port options available
- PCI Express 3.0 (up to 8GT/s)
- CPU offload of transport operations
- Application offload
- GPU communication acceleration
- Precision Clock Synchronization
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- Ethernet encapsulation (EoIB)
- RoHS-R6

Virtual Protocol Interconnect
VPI-enabled adapters allow any standard networking, clustering, storage, and management protocol to operate seamlessly over any converged network leveraging a consolidated software stack. With auto-sense capability, each ConnectX-3 port can identify and operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics. FlexBoot provides additional flexibility by enabling servers to boot from remote InfiniBand or LAN storage targets. ConnectX-3 with VPI and FlexBoot simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

World-Class Performance
InfiniBand: ConnectX-3 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. Efficient computing is achieved by offloading protocol processing and data movement overhead, such as RDMA and Send/Receive semantics, from the CPU, leaving more processor power for the application. CORE-Direct brings the next level of performance improvement by offloading application overhead such as data broadcasting and gathering as well as global synchronization communication routines. GPU communication acceleration provides additional efficiencies by eliminating unnecessary internal data copies, significantly reducing application run time. ConnectX-3 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

RDMA over Converged Ethernet: ConnectX-3, utilizing IBTA RoCE technology, delivers similarly low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient, low-latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.

Sockets Acceleration: Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10/40GbE. The hardware-based stateless offload engines in ConnectX-3 reduce the CPU overhead of IP packet transport. Sockets acceleration software further increases performance for latency-sensitive applications.
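These RDMA and RoCE capabilities are exposed to applications through the OpenFabrics verbs interface listed under Software Support below. The C sketch that follows is illustrative only and is not part of this brief; it assumes a Linux host with libibverbs installed and one active ConnectX-3 port. It opens the first RDMA device, reports whether the port came up as InfiniBand or Ethernet, and creates the basic protection domain, completion queue, queue pair, and registered memory region that an RDMA Send/Receive or RDMA Write application would build on.

/*
 * Minimal verbs sketch (assumes libibverbs and an active RDMA port).
 * Compile with: gcc rdma_sketch.c -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "cannot open %s\n", ibv_get_device_name(devs[0]));
        return 1;
    }

    /* Port 1: report whether the link came up as InfiniBand or Ethernet (RoCE). */
    struct ibv_port_attr port;
    if (ibv_query_port(ctx, 1, &port) == 0)
        printf("%s port 1 link layer: %s\n",
               ibv_get_device_name(devs[0]),
               port.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet (RoCE)"
                                                          : "InfiniBand");

    /* Resources every verbs application needs: PD, CQ, QP, registered MR. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    if (!pd || !cq) {
        fprintf(stderr, "failed to allocate PD/CQ\n");
        return 1;
    }

    struct ibv_qp_init_attr qp_attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .qp_type = IBV_QPT_RC,          /* reliable connected transport */
        .cap     = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &qp_attr);

    char *buf = calloc(1, 4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("QP %u ready; buffer registered with rkey 0x%x\n",
           qp ? qp->qp_num : 0, mr ? mr->rkey : 0);

    /* Connection setup and ibv_post_send()/ibv_post_recv() would follow. */
    if (mr) ibv_dereg_mr(mr);
    free(buf);
    if (qp) ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

Connection establishment and the posting of work requests would follow in a real application; the point of the sketch is that the same verbs code path is used whether the VPI port auto-senses an InfiniBand or a DCB Ethernet fabric.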
I/O Virtualization
ConnectX-3 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 gives data center managers better server utilization while reducing cost, power, and cable complexity (a host configuration sketch follows the ordering information below).

Storage Accelerated
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage InfiniBand RDMA for high-performance storage access.

Software Support
All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, and Citrix XenServer. ConnectX-3 VPI adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.

FEATURE SUMMARY*

INFINIBAND
- IBTA Specification 1.2.1 compliant
- Hardware-based congestion control
- 16 million I/O channels
- 256 to 4Kbyte MTU, 1Gbyte messages

ENHANCED INFINIBAND
- Hardware-based reliable transport
- Collective operations offloads
- GPU communication acceleration
- Hardware-based reliable multicast
- Extended Reliable Connected transport
- Enhanced Atomic operations

ETHERNET
- IEEE Std 802.3ae 10 Gigabit Ethernet
- IEEE Std 802.3ba 40 Gigabit Ethernet
- IEEE Std 802.3ad Link Aggregation and Failover
- IEEE Std 802.3az Energy Efficient Ethernet
- IEEE Std 802.1Q, .1p VLAN tags and priority
- IEEE Std 802.1Qau Congestion Notification
- IEEE P802.1Qaz D0.2 ETS
- IEEE P802.1Qbb D1.0 Priority-based Flow Control
- Jumbo frame support (9600B)

HARDWARE-BASED I/O VIRTUALIZATION
- Single Root IOV
- Address translation and protection
- Dedicated adapter resources
- Multiple queues per virtual machine
- Enhanced QoS for vNICs
- VMware NetQueue support

ADDITIONAL CPU OFFLOADS
- RDMA over Converged Ethernet
- TCP/UDP/IP stateless offload
- Intelligent interrupt coalescence

FLEXBOOT TECHNOLOGY
- Remote boot over InfiniBand
- Remote boot over Ethernet
- Remote boot over iSCSI

PROTOCOL SUPPORT
- Open MPI, OSU MVAPICH, Intel MPI, MS MPI, Platform MPI
- TCP/UDP, EoIB, IPoIB, RDS
- SRP, iSER, NFS RDMA
- uDAPL

COMPATIBILITY

PCI EXPRESS INTERFACE
- PCIe Base 3.0 compliant, 1.1 and 2.0 compatible
- 2.5, 5.0, or 8.0GT/s link rate x8
- Auto-negotiates to x8, x4, x2, or x1
- Support for MSI/MSI-X mechanisms

CONNECTIVITY
- Interoperable with InfiniBand or 10/40 Gigabit Ethernet switches; interoperable with 56GbE Mellanox switches
- Passive copper cable with ESD protection
- Powered connectors for optical and active cable support
- QSFP to SFP+ connectivity through QSA module

OPERATING SYSTEMS/DISTRIBUTIONS
- Citrix XenServer 6.1
- Novell SLES, Red Hat Enterprise Linux (RHEL), and other Linux distributions
- Microsoft Windows Server 2008/2012
- OpenFabrics Enterprise Distribution (OFED)
- Ubuntu 12.04
- VMware ESXi 4.x and 5.x

Ordering Part Number   VPI Ports                       Dimensions w/o Brackets
MCX353A-QCBT           Single QDR 40Gb/s or 10GbE      14.2cm x 5.2cm
MCX354A-QCBT           Dual QDR 40Gb/s or 10GbE        14.2cm x 6.9cm
MCX353A-FCBT           Single FDR 56Gb/s or 40/56GbE   14.2cm x 5.2cm
MCX354A-FCBT           Dual FDR 56Gb/s or 40/56GbE     14.2cm x 6.9cm
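The SR-IOV capability described under I/O Virtualization is enabled on the host before virtual functions can be assigned to VMs. The C sketch below is a generic illustration and is not from this brief: it uses the standard Linux PCI sysfs interface, the adapter address 0000:04:00.0 is a hypothetical example, and whether a given Mellanox driver release uses this interface or driver module parameters is covered in the driver release notes referenced in the footnote below.

/*
 * Minimal sketch: query and enable SR-IOV virtual functions through the
 * generic Linux PCI sysfs interface. PCI address 0000:04:00.0 is a
 * hypothetical example. Run as root; if VFs are already enabled, write 0
 * to sriov_numvfs first before requesting a new count.
 */
#include <stdio.h>

#define PCI_DEV "/sys/bus/pci/devices/0000:04:00.0"

static int read_int(const char *path)
{
    int v = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%d", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    int total = read_int(PCI_DEV "/sriov_totalvfs");
    int cur   = read_int(PCI_DEV "/sriov_numvfs");
    printf("VFs supported: %d, currently enabled: %d\n", total, cur);

    /* Request 4 VFs; each VF appears as its own PCI function with
     * dedicated adapter resources that can be assigned to a VM. */
    FILE *f = fopen(PCI_DEV "/sriov_numvfs", "w");
    if (!f) {
        perror("sriov_numvfs");
        return 1;
    }
    fprintf(f, "4\n");
    fclose(f);
    printf("enabled 4 virtual functions\n");
    return 0;
}

Each enabled virtual function then appears as its own PCI function with dedicated adapter resources, which the hypervisor can pass through directly to a virtual machine.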
*This product brief describes hardware features and capabilities. Please refer to the driver release notes on mellanox.com for feature availability or contact your local sales representative.
**Product images may not include heat sink assembly; actual product may differ.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400  Fax: 408-970-3403  www.mellanox.com

Copyright 2013. Mellanox Technologies. All rights reserved. Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, MLNX-OS, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, CoolBox, FabricIT, Mellanox Federal Systems, Mellanox Software Defined Storage, Mellanox Virtual Modular Switch, MetroX, MetroDX, Mellanox Open Ethernet, Open Ethernet, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

3546PB Rev 1.4