Product Brief
Intel Omni-Path Host Fabric Adapter 100 Series
100 Gbps per port

The Right Fabric for HPC
High Performance Computing (HPC) solutions require the highest levels of performance, scalability, and availability to power complex application workloads. Designed specifically for HPC, Intel Omni-Path Host Fabric Interface (HFI) adapters, an element of the Intel Scalable System Framework, use an advanced on-load design that automatically scales fabric performance with rising server core counts, making these adapters ideal for today's increasingly demanding workloads.

Benefits
- End-to-end fabric optimization
- Scalable, low-latency MPI (less than 1 µs end-to-end)
- High MPI message rates (160 mmps)
- Efficient storage communications with new 8K and 10K MTUs
- Congestion control and QoS (with deterministic latency)
- Low power consumption
- Scalable to tens of thousands of nodes
- Open Fabrics Alliance* (OFA) software

Key Features
- 100 Gbps link speed
- x16 version (supports full data rate)
- x8 version (PCIe limited)
- MSI-X interrupt handling for high performance on multi-core hosts

Multiple Performance Levels
Two Intel Omni-Path Host Fabric Adapter models are available to help fabric designers maximize performance versus cost for diverse requirements. The PCIe x16 model supports the full 100 Gbps line rate. The PCIe x8 model supports the same 100 Gbps link rate, while the narrower PCIe connection limits actual data rates to 56 Gbps (a back-of-the-envelope check of this limit appears in the sketch at the end of this brief).

Advanced Quality of Service (QoS)
Intel Omni-Path Host Fabric Interface adapters provide the foundation for powerful and efficient traffic control. Data is segmented into 65-bit Flow Control Digits (FLITs), which are assembled into much larger Link Transfer Packets (LTPs) for efficient wire transfer (an illustrative sketch of this framing appears at the end of this brief). By managing traffic at the FLIT level, Intel Omni-Path Architecture (Intel OPA) edge and director switches are able to make extremely granular switching decisions to optimize latency, throughput, and resiliency more effectively for all traffic types.

High Reliability and Resilience
With their on-load design, Intel Omni-Path Host Fabric Interface adapters eliminate the need for data path firmware and external memory, while maintaining all connection state information in host memory. This reduces the potential for data errors and makes the fabric inherently more resilient to adapter and fabric failures. Additional protection against errors and downtime is provided by ECC protection on all internal SRAMs and parity checking on all internal buses.

Investment Protection
Great care was taken to ease the transition from previous-generation fabric solutions to Intel OPA. The proven Open Fabrics Alliance* (OFA) software stack just works with the vast majority of existing HPC applications and provides an ideal foundation for future development. The on-load architecture also delivers increasing value over time by allowing fabric performance to scale automatically with ongoing advances in Intel Xeon processors and Intel Xeon Phi coprocessors.

HFI SPECIFICATIONS

HFI Specifications and Interfaces
ASIC: Single Intel OP HFI ASIC
Max Data Rate: 100 Gbps (PCIe x16); 56 Gbps (PCIe x8). The effective rate of 56 Gbps is determined by the PCIe x8 interface; the Intel OP link will operate at up to 100 Gbps.
Bus interface: PCI Express* Gen3 x16 or PCI Express* Gen3 x8
Device type: End point
Advanced interrupts: MSI-X, INTx
Virtual Lanes: Configurable from one to eight VLs plus one management VL
MTU: Configurable MTU size of 2 KB, 4 KB, 8 KB, or 10 KB
Interfaces: Supports QSFP28 quad small form factor pluggable passive copper, optical transceivers, and active optical cables

Physical Specifications
Port: One Intel OP 4X Host Fabric Interface, QSFP28
LED: Link status indicator (green)

Software
Operating Systems: Red Hat* Enterprise Linux*, SUSE* Linux* Enterprise Server, CentOS*, Scientific Linux*; contact your representative for others

* Other names and brands may be claimed as the property of others.
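
The Multiple Performance Levels section above states that the PCIe x8 model is limited to an effective 56 Gbps even though the Omni-Path link itself runs at 100 Gbps. The short Python sketch below is a back-of-the-envelope check of that limit, assuming the standard PCIe Gen3 signaling rate with 128b/130b encoding plus roughly 12% packet/protocol overhead; the overhead figure is an assumption chosen for illustration, not a number taken from this brief.

    # Rough estimate of usable PCIe Gen3 bandwidth for x8 and x16 slots.
    # The ~12% protocol overhead is an assumed, illustrative figure.

    GT_PER_LANE = 8.0          # PCIe Gen3 raw signaling rate, GT/s per lane
    ENCODING = 128.0 / 130.0   # 128b/130b line-encoding efficiency
    PROTOCOL_OVERHEAD = 0.12   # assumed TLP/DLLP and flow-control overhead

    def pcie_gen3_effective_gbps(lanes: int) -> float:
        """Approximate usable PCIe Gen3 bandwidth in Gbps for a given lane count."""
        raw = lanes * GT_PER_LANE * ENCODING
        return raw * (1.0 - PROTOCOL_OVERHEAD)

    if __name__ == "__main__":
        for lanes in (8, 16):
            print(f"PCIe Gen3 x{lanes}: ~{pcie_gen3_effective_gbps(lanes):.0f} Gbps usable")
        # x8  -> ~55 Gbps, in line with the 56 Gbps effective rate quoted above
        # x16 -> ~111 Gbps, comfortably above the 100 Gbps Omni-Path line rate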
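
The Advanced Quality of Service (QoS) section describes data being segmented into 65-bit FLITs that are assembled into much larger LTPs. The minimal Python sketch below models that framing under two illustrative assumptions that are not spelled out in this brief: that 64 of a FLIT's 65 bits carry payload (the remaining bit marking the FLIT type) and that 16 FLITs are grouped into each LTP.

    # Simplified model of FLIT/LTP framing: carve a payload into 8-byte
    # FLITs and group them into fixed-size LTPs for transfer on the wire.
    # FLITS_PER_LTP = 16 is an assumed grouping factor for illustration.

    from dataclasses import dataclass
    from typing import List

    FLIT_DATA_BYTES = 8     # assumed payload bytes per 65-bit FLIT
    FLITS_PER_LTP = 16      # assumed number of FLITs per LTP

    @dataclass
    class Flit:
        is_head: bool       # simplified stand-in for the FLIT type bit
        data: bytes         # payload bytes (zero-padded if short)

    def to_flits(payload: bytes) -> List[Flit]:
        """Carve a payload into fixed-size FLITs, padding the final one."""
        flits = []
        for off in range(0, len(payload), FLIT_DATA_BYTES):
            chunk = payload[off:off + FLIT_DATA_BYTES].ljust(FLIT_DATA_BYTES, b"\x00")
            flits.append(Flit(is_head=(off == 0), data=chunk))
        return flits

    def to_ltps(flits: List[Flit]) -> List[List[Flit]]:
        """Group FLITs into LTPs; the last LTP may be partially filled."""
        return [flits[i:i + FLITS_PER_LTP] for i in range(0, len(flits), FLITS_PER_LTP)]

    if __name__ == "__main__":
        message = b"example MPI payload " * 12   # 240 bytes of dummy data
        flits = to_flits(message)
        ltps = to_ltps(flits)
        print(f"{len(message)} bytes -> {len(flits)} FLITs -> {len(ltps)} LTPs")
        # 240 bytes -> 30 FLITs -> 2 LTPs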