ADAPTER CARD PRODUCT BRIEF

ConnectX-4 EN Card
100Gb/s Ethernet Adapter Card
Single/Dual-Port 100-Gigabit Ethernet Adapter Cards

HIGHLIGHTS

NEW FEATURES
- 100Gb/s Ethernet per port
- 1/10/25/40/50/56/100 Gb/s speeds
- Single and dual-port options available
- T10-DIF Signature Handover
- CPU offloading of transport operations
- Application offloading
- Mellanox PeerDirect communication acceleration
- Hardware offloads for NVGRE, VXLAN and GENEVE encapsulated traffic
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- RoHS compliant
- ODCC compatible

BENEFITS
- High-performance silicon for applications requiring high bandwidth, low latency and high message rate
- World-class cluster, network, and storage performance
- Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms
- Cutting-edge performance in virtualized overlay networks (NVGRE and GENEVE)
- Efficient I/O consolidation, lowering data center costs and complexity
- Virtualization acceleration
- Power efficiency
- Scalability to tens of thousands of nodes

Mellanox ConnectX-4 EN network controller cards with 100Gb/s Ethernet connectivity provide a high-performance and flexible solution for Web 2.0, cloud, data analytics, database, and storage platforms.

With the exponential growth of data being shared and stored by applications and social networks, the need for high-speed, high-performance compute and storage data centers is skyrocketing. ConnectX-4 EN provides high performance for demanding data centers, public and private clouds, Web 2.0 and Big Data applications, and storage systems, enabling today's corporations to meet the demands of the data explosion.

ConnectX-4 EN provides an unmatched combination of 100Gb/s bandwidth in a single port, low latency, and specific hardware offloads, addressing both today's and the next generation's compute and storage data center demands.

I/O Virtualization
ConnectX-4 EN SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 EN gives data center administrators better server utilization while reducing cost, power, and cable complexity, allowing more virtual machines and more tenants on the same hardware.

Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 effectively addresses this by providing advanced NVGRE and GENEVE hardware offloading engines that encapsulate and de-capsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic. With ConnectX-4, data center operators can achieve native performance in the new network architecture.

RDMA over Converged Ethernet (RoCE)
ConnectX-4 EN supports RoCE specifications, delivering low-latency, high-performance RDMA over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 EN advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
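RoCE setup itself is handled by the NIC driver and the RDMA software stack, but on a Linux host it is easy to confirm which adapter ports are exposed for RoCE: a RoCE port reports an Ethernet link layer in the kernel's RDMA sysfs tree. The Python sketch below is illustrative only; the roce_ports helper is not part of any Mellanox tool, and the paths assume the standard Linux RDMA sysfs layout.

    #!/usr/bin/env python3
    """Illustrative sketch: flag RDMA ports whose link layer is Ethernet (RoCE)."""
    from pathlib import Path

    RDMA_SYSFS = Path("/sys/class/infiniband")  # standard kernel RDMA sysfs root

    def roce_ports():
        """Yield (device, port, link_layer) for every RDMA port on the host."""
        if not RDMA_SYSFS.exists():
            return  # no RDMA-capable devices, or the driver is not loaded
        for dev in sorted(RDMA_SYSFS.iterdir()):
            for port in sorted((dev / "ports").iterdir()):
                link_layer = (port / "link_layer").read_text().strip()
                yield dev.name, port.name, link_layer

    if __name__ == "__main__":
        for dev, port, layer in roce_ports():
            note = "RoCE (Ethernet link layer)" if layer == "Ethernet" else layer
            print(f"{dev} port {port}: {note}")

On a host with a ConnectX-4 EN card and the mlx5 driver loaded, the output would typically list devices such as mlx5_0 with their Ethernet-link-layer ports.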
Mellanox PeerDirect
Mellanox PeerDirect communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

Storage Acceleration
Storage applications will see improved performance with the high bandwidth that ConnectX-4 EN delivers. Moreover, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

Signature Handover
ConnectX-4 EN supports hardware checking of T10 Data Integrity Field/Protection Information (T10-DIF/PI), reducing the CPU overhead and accelerating delivery of data to the application. Signature handover is handled by the adapter on ingress and/or egress packets, reducing the load on the CPU at the initiator and/or target machines.

Host Management
Mellanox host management and control capabilities include NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe - Baseboard Management Controller (BMC) interfaces, as well as PLDM for Monitor and Control (DSP0248) and PLDM for Firmware Update (DSP0267).

Software Support
All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, and Citrix XenServer. ConnectX-4 EN adapters support OpenFabrics-based RDMA protocols and software, and are compatible with configuration and management tools from OEMs and operating system vendors.

Compatibility

PCI Express Interface
- PCIe Gen 3.0 compliant, 2.0 and 1.1 compatible
- 2.5, 5.0, or 8.0 GT/s link rate x16
- Auto-negotiates to x16, x8, x4, x2, or x1
- Support for MSI/MSI-X mechanisms
(A sketch for verifying the negotiated link appears at the end of this brief.)

Operating Systems/Distributions*
- RHEL/CentOS
- Windows
- FreeBSD
- VMware
- OpenFabrics Enterprise Distribution (OFED)
- OpenFabrics Windows Distribution (WinOF)

Connectivity
- Interoperable with 1/10/25/40/50/100Gb Ethernet switches
- Passive copper cable with ESD protection
- Powered connectors for optical and active cable support
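As a sanity check after installation, the link the card actually negotiated can be read back from standard Linux PCI sysfs attributes; a card that trained at less than x16 at 8.0 GT/s cannot sustain 100Gb/s line rate. The following is a minimal Python sketch, assuming a Linux host: 0x15b3 is the Mellanox PCI vendor ID, and the link attributes used are generic PCI sysfs files, not Mellanox-specific.

    #!/usr/bin/env python3
    """Illustrative sketch: report negotiated PCIe link speed/width for Mellanox devices."""
    from pathlib import Path

    MELLANOX_VENDOR_ID = "0x15b3"  # Mellanox Technologies PCI vendor ID

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        if (dev / "vendor").read_text().strip() != MELLANOX_VENDOR_ID:
            continue
        if not (dev / "current_link_speed").exists():
            continue  # e.g., SR-IOV virtual functions may not expose link attributes
        speed = (dev / "current_link_speed").read_text().strip()
        width = (dev / "current_link_width").read_text().strip()
        max_speed = (dev / "max_link_speed").read_text().strip()
        max_width = (dev / "max_link_width").read_text().strip()
        print(f"{dev.name}: x{width} at {speed} (max: x{max_width} at {max_speed})")

If the reported width or speed is below x16 at 8.0 GT/s, the usual culprits are the physical slot wiring or BIOS/UEFI slot configuration rather than the adapter itself.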