Product Brief
Intel Gigabit ET, ET2, and EF Multi-Port Server Adapters
Network Connectivity

Intel Gigabit ET, ET2, and EF Multi-Port Server Adapters
Dual- and quad-port Gigabit Ethernet server adapters designed for multi-core processors and optimized for virtualization

• High-performing 10/100/1000 Ethernet connection
• Reliable and proven Gigabit Ethernet technology from Intel Corporation
• Scalable PCI Express* interface provides dedicated I/O bandwidth for I/O-intensive networking applications
• Optimized for virtualized environments
• Flexibility with iSCSI Boot and a choice of dual- and quad-port adapters in both fiber and copper

The Intel Gigabit ET, ET2, and EF Multi-Port Server Adapters are Intel's third generation of PCIe GbE network adapters. Built with the Intel 82576 Gigabit Ethernet Controller, these new adapters showcase the next evolution in GbE networking features for the enterprise network and data center. These features include support for multi-core processors and optimization for server virtualization.

Designed for Multi-Core Processors
These dual- and quad-port adapters provide high-performing, multi-port Gigabit connectivity in a multi-core platform as well as in a virtualized environment. In a multi-core platform, the adapters support technologies such as multiple queues, receive-side scaling, MSI-X, and Low Latency Interrupts that help accelerate data across the platform, thereby improving application response times.

The I/O technologies on a multi-core platform make use of the multiple queues and multiple interrupt vectors available on the network controller. These queues and interrupt vectors help load-balance the data and interrupts across the processor cores in order to lower the load on the processors and improve overall system performance. For example, depending on the latency sensitivity of the data, the Low Latency Interrupts feature can bypass the moderation time interval for specific TCP ports or for flagged packets, giving certain types of data streams the least amount of latency to the application.
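To make the multi-queue idea concrete, the short Python sketch below is illustrative only, not Intel driver code: the CRC32 hash stands in for the controller's real receive-side-scaling hash, and all names and the queue count are invented. It shows how hashing each flow's 4-tuple spreads traffic across several receive queues, each of which would be serviced by its own MSI-X vector on a separate core.

    # Illustrative sketch only -- not Intel driver code. It models how an
    # RSS-style hash spreads incoming flows across several receive queues,
    # each of which would be serviced by its own MSI-X vector on one core.
    # The CRC32 hash stands in for the controller's real hash function.
    import zlib
    from collections import defaultdict

    NUM_QUEUES = 8  # hypothetical number of RX queues on one port


    def rss_queue(src_ip, dst_ip, src_port, dst_port):
        """Map a flow's 4-tuple to a receive queue index."""
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
        return zlib.crc32(key) % NUM_QUEUES


    def distribute(packets):
        """Sort packets into per-queue lists; each queue maps to one core/vector."""
        queues = defaultdict(list)
        for pkt in packets:
            q = rss_queue(pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"])
            queues[q].append(pkt)
        return queues


    if __name__ == "__main__":
        flows = [{"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9",
                  "src_port": 40000 + i, "dst_port": 80} for i in range(16)]
        for q, pkts in sorted(distribute(flows).items()):
            print(f"queue {q} (core {q}): {len(pkts)} packet(s)")

In the actual adapter, the hash, queue count, and core affinity are handled by the 82576 controller and its driver; the sketch only illustrates why spreading flows across multiple queues and interrupt vectors balances the processing load.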
Optimized for Virtualization
The Intel Gigabit ET, ET2, and EF Multi-Port Server Adapters showcase the latest virtualization technology, Intel Virtualization Technology for Connectivity (Intel VT for Connectivity). Intel VT for Connectivity is a suite of hardware assists that improves overall system performance by lowering the I/O overhead in a virtualized environment. This optimizes CPU usage, reduces system latency, and improves I/O throughput. Intel VT for Connectivity includes:
• Virtual Machine Device Queues (VMDq)
• Intel I/O Acceleration Technology (Intel I/OAT)

Use of multi-port adapters in a virtualized environment is important because of the need to provide redundancy and data connectivity for the applications and workloads in the virtual machines. Due to slot limitations and those redundancy and connectivity needs, it is recommended that a virtualized physical server have at least six GbE ports to satisfy its I/O demands.

Virtual Machine Device Queues (VMDq)
VMDq reduces the I/O overhead created by the hypervisor in a virtualized server by performing data sorting and coalescing in the network silicon. VMDq technology makes use of multiple queues in the network controller. As data packets enter the network adapter, they are sorted, and packets traveling to the same destination (or virtual machine) are grouped together in a single queue. The packets are then sent to the hypervisor, which directs them to their respective virtual machines. Relieving the hypervisor of packet filtering and sorting improves overall CPU usage and throughput levels. (A simplified sketch of this sorting appears at the end of this brief.)

This generation of PCIe Intel Gigabit adapters provides improved performance with the next-generation VMDq technology, which includes features such as loopback functionality for inter-VM communication, priority-weighted bandwidth management, and doubling the number of data queues per port from four to eight. It now also supports multicast and broadcast data on a virtualized server.

Intel I/O Acceleration Technology
Intel I/O Acceleration Technology (Intel I/OAT) is a suite of features that improves data acceleration across the platform, from the networking devices to the chipset and processors, helping to improve system performance and application response times. The features include multiple queues, Receive Side Scaling (RSS), Direct Cache Access (DCA), MSI-X, Low Latency Interrupts, and others. Using multiple queues and receive-side scaling, a DMA engine moves data using the chipset instead of the CPU. DCA enables the adapter to pre-fetch data from the memory cache, thereby avoiding cache misses and improving application response times. MSI-X helps in load-balancing I/O interrupts across multiple processor cores, and Low Latency Interrupts can provide certain data streams a non-modulated path directly to the application. RSS directs the interrupts to a specific processor core based on the application's address.

Single-Root I/O Virtualization (SR-IOV)
For mission-critical applications where dedicated I/O is required for maximum network performance, users can assign a dedicated virtual function port to a VM. The controller provides direct VM connectivity and data protection across VMs using SR-IOV. SR-IOV technology enables the data to bypass the software virtual switch, providing near-native performance. It assigns either physical or virtual I/O ports to individual VMs directly.
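The sketch below is a simplified, hypothetical Python model of the VMDq sorting described above; the MAC-to-VM table, function names, and addresses are invented for illustration and do not represent Intel silicon or driver behavior. It groups frames by destination MAC into per-VM queues in software, mirroring what the adapter does in hardware, so the hypervisor only forwards pre-sorted groups instead of inspecting every frame.

    # Simplified, hypothetical model of VMDq sorting -- not Intel silicon or
    # driver code. Frames are grouped by destination MAC into per-VM queues,
    # mirroring the sorting the adapter performs in hardware, so the
    # hypervisor only forwards pre-sorted groups of frames.
    from collections import defaultdict

    # Invented layer-2 table: destination MAC address -> virtual machine name.
    VM_MAC_TABLE = {
        "00:1b:21:aa:00:01": "vm-web",
        "00:1b:21:aa:00:02": "vm-db",
    }


    def sort_into_vm_queues(frames):
        """Group received frames by their destination VM (one queue per VM)."""
        queues = defaultdict(list)
        for frame in frames:
            vm = VM_MAC_TABLE.get(frame["dst_mac"], "default")  # unknown MACs -> default queue
            queues[vm].append(frame)
        return queues


    def hypervisor_deliver(queues):
        """The hypervisor simply hands each pre-sorted queue to its VM."""
        for vm, frames in queues.items():
            print(f"deliver {len(frames)} frame(s) to {vm}")


    if __name__ == "__main__":
        rx = [
            {"dst_mac": "00:1b:21:aa:00:01", "payload": b"GET /"},
            {"dst_mac": "00:1b:21:aa:00:02", "payload": b"SELECT 1"},
            {"dst_mac": "00:1b:21:aa:00:01", "payload": b"GET /img"},
        ]
        hypervisor_deliver(sort_into_vm_queues(rx))

SR-IOV, described above, goes a step further: instead of handing sorted queues back to the hypervisor, it assigns a virtual function directly to the VM so traffic can bypass the software virtual switch altogether.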