Product Brief
Intel Ethernet 700 Series Network Adapters

Intel Ethernet Network Adapter X722

Dual- and quad-port 10GbE adapters supporting highly scalable iWARP RDMA for high-throughput, low-latency, low-CPU data communication.

Key Features
- iWARP RDMA
- PCI Express (PCIe) v3.0, x8
- Network Virtualization offloads: VxLAN, GENEVE, and NVGRE
- Intel Ethernet Flow Director for hardware-based application traffic steering
- Data Plane Development Kit (DPDK) optimized for efficient packet processing
- Excellent small-packet performance for network appliances and Network Functions Virtualization (NFV)
- Intelligent offloads to enable high performance on servers with Intel Xeon processors
- I/O virtualization innovations for maximum performance in a virtualized server

Overview
The Intel Ethernet Network Adapter X722 features iWARP RDMA for high-data-throughput, low-latency workloads with low CPU utilization. The X722 is ideal for Software Defined Storage solutions, NVMe over Fabrics solutions, and Virtual Machine migration acceleration.

RDMA is a host-offload, host-bypass technology that enables low-latency, high-throughput, direct memory-to-memory data communication between applications over a network.

iWARP extensions to TCP/IP, standardized by the Internet Engineering Task Force (IETF), eliminate three major sources of networking overhead: TCP/IP stack processing, memory copies, and application context switches. Because it is based on TCP/IP, iWARP is highly scalable and ideal for hyper-converged storage solutions.

The X722 is one of the Intel Ethernet 700 Series Network Adapters. These adapters are the foundation for server connectivity, providing broad interoperability, critical performance optimizations, and increased agility for Telecommunications, Cloud, and Enterprise IT network solutions:
- Interoperability - Multiple media types for broad compatibility, backed by extensive testing and validation.
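On a Linux host, the iWARP capability described above can be verified with standard RDMA tooling. This is a minimal sketch, assuming the rdma-core and iproute2 packages are installed and the X722's RDMA function is bound to its kernel driver (i40iw on older kernels, irdma on newer ones).

```shell
# List RDMA links exposed by the kernel (iproute2's rdma tool)
rdma link show

# Inspect verbs-level device attributes; for the X722 the transport
# line should report iWARP rather than InfiniBand
ibv_devinfo | grep -E "hca_id|transport"
```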
- Optimization - Intelligent offloads and accelerators to unlock network performance in servers with Intel Xeon processors.
- Agility - Both Kernel and Data Plane Development Kit (DPDK) drivers for scalable packet processing.

The Intel Ethernet 700 Series delivers networking performance across a wide range of network port speeds through intelligent offloads, sophisticated packet processing, and quality open-source drivers. All Intel Ethernet 700 Series Network Adapters include these feature-rich technologies:

Flexible and Scalable I/O for Virtualized Infrastructures
Intel Virtualization Technology (Intel VT) delivers outstanding I/O performance in virtualized server environments. I/O bottlenecks are reduced through intelligent offloads, enabling near-native performance and scalability. These offloads include Virtual Machine Device Queues (VMDq) and Flexible Port Partitioning, which uses SR-IOV with a common Virtual Function driver for networking traffic per Virtual Machine (VM).

Host-based features supported include:

VMDq for Emulated Path: VMDq enables a hypervisor to represent a single network port as multiple network ports that can be assigned to individual VMs. Traffic handling is offloaded to the network controller, delivering the benefits of port partitioning with little to no administrative overhead for the IT staff.

SR-IOV for Direct Assignment: Adapter-based isolation and switching for various virtual station instances enables optimal CPU usage in virtualized environments.
- Up to 128 virtual functions (VFs); each VF can support a unique and separate data path for I/O-related functions within the PCI Express hierarchy.
- Use of SR-IOV with a networking device allows, for example, the bandwidth of a single port (function) to be partitioned into smaller slices that can be allocated to specific VMs or guests via a standard interface.

Intel Ethernet Adaptive Virtual Function (Intel Ethernet AVF): Customers deploying mass-scale VMs or containers for their network infrastructure now have a common VF driver. This driver eases SR-IOV hardware upgrades or changes, preserves base-mode functionality in hardware and software, and supports an advanced set of features in the Intel Ethernet 700 Series.

Flexible Port Partitioning (FPP)
FPP leverages the PCI-SIG SR-IOV specification. Virtual controllers can be used by the Linux host directly and/or assigned to virtual machines.
- Assign up to 63 Linux host processes or virtual machines per port to virtual functions.
- Control the partitioning of per-port bandwidth across multiple dedicated network resources, ensuring balanced QoS by giving each assigned VM's virtual controller equal access to the port's bandwidth.

Network administrators can also rate limit each of these services to control how much of the pipe is available to each process.

Greater Intelligence and Performance for NFV and Cloud Deployments
Dynamic Device Personalization (DDP) customizable packet filtering, along with an enhanced Data Plane Development Kit (DPDK), supports advanced packet forwarding and highly efficient packet processing for both Cloud and Network Functions Virtualization (NFV) workloads.

DDP enables workload-specific optimizations using the programmable packet-processing pipeline. Additional protocols can be added to the default set to improve packet-processing efficiency, resulting in higher throughput and reduced latency. New protocols can be added or modified on demand and applied at runtime using Software Defined Firmware or APIs, eliminating the need to reset or reboot the server. This not only keeps the server and VMs up, running, and computing, it also increases performance for Virtual Network Functions (VNFs) that process network traffic not included in the default firmware. Download DDP Profiles.

DPDK provides a programming framework for Intel processors and enables faster development of high-speed data-packet networking applications.

Advanced Traffic Steering
Intel Ethernet Flow Director (Intel Ethernet FD) is an advanced traffic steering capability. Large numbers of flow-affinity filters direct receive packets by their flows to queues for classification, load balancing, and matching between flows and CPU cores.

Steering traffic into specific queues can eliminate context switching within the CPU. As a result, Intel Ethernet FD significantly increases the number of transactions per second and reduces latency for cloud applications such as memcached.

Enhanced Network Virtualization Overlays (NVO)
Network virtualization has changed the way networking is done in the data center; these adapters deliver accelerations across a wide range of tunneling methods.

VxLAN, GENEVE, NVGRE, MPLS, and VxLAN-GPE with NSH Offloads: These stateless offloads preserve application performance for overlay networks, and network traffic can be distributed across CPU cores, increasing network throughput.
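As a concrete illustration of the overlay offloads, a VxLAN tunnel endpoint can be created with standard iproute2 commands. This is a sketch only; the interface name enp24s0f0 and VNI 42 are hypothetical.

```shell
# Create a VxLAN endpoint (VNI 42) bound to the physical port,
# using the IANA-assigned VxLAN UDP port 4789
ip link add vxlan42 type vxlan id 42 dev enp24s0f0 dstport 4789
ip link set vxlan42 up

# Confirm the driver advertises stateless tunnel offloads
ethtool -k enp24s0f0 | grep udp_tnl
```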
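The SR-IOV partitioning and per-VF rate limiting described under Flexible Port Partitioning can be sketched with standard Linux tooling. The interface name, VF count, and rate here are hypothetical.

```shell
# Carve 4 virtual functions out of the physical function via sysfs
echo 4 > /sys/class/net/enp24s0f0/device/sriov_numvfs

# The VFs appear as PCIe functions that can be assigned to VMs or host processes
lspci | grep -i "virtual function"

# Rate limit VF 0 so it gets at most 1000 Mb/s of the port's bandwidth
ip link set dev enp24s0f0 vf 0 max_tx_rate 1000
```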
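Loading a DDP profile at runtime, as described above, is typically done through DPDK's testpmd application. The PCI address and profile file names below are hypothetical, and the exact command syntax may differ between DPDK releases.

```shell
# Bind testpmd interactively to the X722 port (PCI address is hypothetical)
dpdk-testpmd -a 0000:18:00.0 -- -i

# At the testpmd prompt: load a profile, keeping a backup for rollback,
# then list the profiles currently applied to the port
ddp add 0 ./profile.pkg,./profile-backup.pkg
ddp get list 0
```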
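The Flow Director steering described under Advanced Traffic Steering maps onto ethtool's ntuple filter interface. The interface name and queue number are hypothetical; memcached's default port 11211 is used as the example flow.

```shell
# Enable Flow Director (ntuple) filters on the port
ethtool -K enp24s0f0 ntuple on

# Steer memcached traffic (TCP destination port 11211) to RX queue 6
ethtool -N enp24s0f0 flow-type tcp4 dst-port 11211 action 6

# List the filters currently installed
ethtool -n enp24s0f0
```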