Product Brief
Intel Ethernet 700 Series Network Adapters
Intel Ethernet Server Adapter XL710 for OCP

Industry-leading, energy-efficient design for 40/10GbE performance and multi-core processors.

Key Features
- OCP Spec. v2.0, Type 1
- Supports 4x10GbE, 1x40GbE, or 2x40GbE configurations
- PCI Express (PCIe) 3.0, x8
- Exceptionally low-power adapters
- Network virtualization offloads, including VxLAN, GENEVE, NVGRE, MPLS, and VxLAN-GPE with Network Service Headers (NSH)
- Intel Ethernet Flow Director for hardware-based application traffic steering
- Dynamic Device Personalization (DDP) for increased packet processing efficiency in NFV and Cloud deployments
- Data Plane Development Kit (DPDK) optimized for efficient packet processing

Overview
As a founding member of OCP, Intel strives to increase the number of open solutions based on OCP specifications. The Intel Ethernet Network Adapter XL710 for OCP is part of the Intel Ethernet 700 Series and offers 40/10GbE port speeds.

Intel Ethernet 700 Series Network Adapters are the foundation for server connectivity, providing broad interoperability, critical performance optimizations, and increased agility for Communications, Cloud, and Enterprise IT network solutions.

- Interoperability: Multiple speeds and media types for broad compatibility, backed by extensive testing and validation.
- Optimization: Intelligent offloads and accelerators to unlock network performance in servers with Intel Xeon processors.
- Agility: Both kernel and Data Plane Development Kit (DPDK) drivers for scalable packet processing.

The Intel Ethernet 700 Series delivers networking performance across a wide range of network port speeds through intelligent offloads, sophisticated packet processing, and quality open source drivers.
- Excellent small-packet performance for network appliances and Network Functions Virtualization (NFV)
- Intelligent offloads to enable high performance on servers with Intel Xeon processors
- I/O virtualization innovations for maximum performance in a virtualized server

All Intel Ethernet 700 Series Network Adapters include these feature-rich technologies:

Flexible and Scalable I/O for Virtualized Infrastructures
Intel Virtualization Technology (Intel VT) delivers outstanding I/O performance in virtualized server environments. I/O bottlenecks are reduced through intelligent offloads, enabling near-native performance and scalability. These offloads include Virtual Machine Device Queues (VMDq) and Flexible Port Partitioning using SR-IOV with a common Virtual Function driver for networking traffic per virtual machine (VM). Host-based features supported include:

VMDq for Emulated Path: VMDq enables a hypervisor to represent a single network port as multiple network ports that can be assigned to individual VMs.

Flexible Port Partitioning (FPP): FPP leverages the PCI-SIG SR-IOV specification. Virtual controllers can be used by the Linux host directly and/or assigned to virtual machines. Up to 63 Linux host processes or virtual machines per port can be assigned to virtual functions. The partitioning of per-port bandwidth can be controlled across multiple dedicated network resources, ensuring balanced QoS by giving each assigned VM's virtual controller equal access to the port's bandwidth. Network administrators can also rate-limit each of these services to control how much of the pipe is available to each process.
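On Linux, the virtual functions behind this kind of port partitioning are typically created through the kernel's standard sysfs interface. The sketch below only builds the command an administrator would run as root; the interface name ens1f0 and the VF count are illustrative, not taken from this brief.

```python
# Per-device sysfs file the Linux kernel exposes for SR-IOV VF creation.
SYSFS_FMT = "/sys/class/net/{ifname}/device/sriov_numvfs"

def vf_creation_command(ifname: str, num_vfs: int) -> str:
    """Build the shell command (run as root) that partitions a port
    into `num_vfs` SR-IOV virtual functions via sysfs."""
    if not 0 <= num_vfs <= 128:  # the 700 Series supports up to 128 VFs
        raise ValueError("Intel Ethernet 700 Series exposes at most 128 VFs")
    return f"echo {num_vfs} > {SYSFS_FMT.format(ifname=ifname)}"

if __name__ == "__main__":
    # Partition port ens1f0 into 8 VFs, each assignable to a VM or container.
    print(vf_creation_command("ens1f0", 8))
```

Each VF then appears as a separate PCIe function that can be passed through to a VM or used directly by a host process.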
Traffic handling is offloaded to the network controller, delivering the benefits of port partitioning with little to no administrative overhead for the IT staff.

SR-IOV for Direct Assignment: Adapter-based isolation and switching for various virtual station instances enables optimal CPU usage in virtualized environments. Up to 128 virtual functions (VFs) are supported, and each VF can provide a unique and separate data path for I/O-related functions within the PCI Express hierarchy. Use of SR-IOV with a networking device, for example, allows the bandwidth of a single port (function) to be partitioned into smaller slices that can be allocated to specific VMs or guests via a standard interface.

Intel Ethernet Adaptive Virtual Function (Intel Ethernet AVF): Customers deploying mass-scale VMs or containers for their network infrastructure now have a common VF driver. This driver eases SR-IOV hardware upgrades or changes, preserves base-mode functionality in hardware and software, and supports an advanced set of features in the Intel Ethernet 700 Series.

Enhanced Network Virtualization Overlays (NVO): Network virtualization has changed the way networking is done in the data center, delivering accelerations across a wide range of tunneling methods.

VxLAN, GENEVE, NVGRE, MPLS, and VxLAN-GPE with NSH Offloads: These stateless offloads preserve application performance for overlay networks, and the network traffic can be distributed across CPU cores, increasing network throughput.

Greater Intelligence and Performance for NFV and Cloud Deployments
Dynamic Device Personalization (DDP) customizable packet filtering, along with the enhanced Data Plane Development Kit (DPDK), supports advanced packet forwarding and highly efficient packet processing for both Cloud and Network Functions Virtualization (NFV) workloads.

DDP enables workload-specific optimizations using the programmable packet-processing pipeline. Additional protocols can be added to the default set to improve packet processing efficiency, resulting in higher throughput and reduced latency. New protocols can be added or modified on demand and applied at runtime using software-defined firmware or APIs, eliminating the need to reset or reboot the server. This not only keeps the server and VMs up, running, and computing, it also increases performance for Virtual Network Functions (VNFs) that process network traffic not included in the default firmware. Download DDP Profiles.

DPDK provides a programming framework for Intel processors and enables faster development of high-speed data packet networking applications.

Advanced Traffic Steering
Intel Ethernet Flow Director (Intel Ethernet FD) is an advanced traffic steering capability. Large numbers of flow-affinity filters direct receive packets by their flows to queues for classification, load balancing, and matching between flows and CPU cores. Steering traffic into specific queues can eliminate context switching required within the CPU. As a result, Intel Ethernet FD significantly increases the number of transactions per second and reduces latency for cloud applications like memcached.
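On Linux, flow-affinity filters of this kind are commonly programmed through ethtool's ntuple interface. The sketch below only builds the commands rather than running them; the interface name, destination port, and queue number are illustrative (the memcached port 11211 echoes the example above).

```python
def flow_director_commands(ifname: str, dst_port: int, queue: int) -> list[str]:
    """Build the ethtool commands that steer one TCP flow (for example,
    memcached on port 11211) to a dedicated RX queue, keeping that flow
    on a single CPU core."""
    return [
        f"ethtool -K {ifname} ntuple on",  # enable ntuple receive filters
        f"ethtool -N {ifname} flow-type tcp4 dst-port {dst_port} action {queue}",
    ]

if __name__ == "__main__":
    # Steer memcached traffic on ens1f0 to RX queue 4.
    for cmd in flow_director_commands("ens1f0", 11211, 4):
        print(cmd)
```

Pinning the matching queue's interrupt to the core running the application then avoids the cross-core context switches described above.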