At a Glance Features
Industry-leading throughput and latency performance
Operates at 1 Gb/s or 10 Gb/s with auto-negotiation on all four ports
10GBASE-T connectivity supporting up to 100 meters with CAT 6A cabling
Up to 80 Gb/s bi-directional near line rate throughput
Hardware-accelerated TCP/IP/UDP stateless offloads, as well as a TCP Offload Engine (TOE)
Superior small packet performance
Active Health System support
PXE, Jumbo Frames, Checksum & Segmentation Offload, IPv6 and RSS
On chip temperature monitor (Sea of Sensors)
Support for Preboot Execution Environment (PXE)
* Storage personality must be disabled on a NIC intended for DPDK workloads. DPDK and Storage modes
cannot be used concurrently on current-generation CNA NICs. HPE recommends using two separate NICs for
Storage (Control Plane) and DPDK (Data Plane) workloads for the optimal high-availability configuration.
This adapter delivers a 20 Gb/s bi-directional Ethernet transfer rate per port (80 Gb/s per adapter), providing
the network performance needed to improve response times and alleviate bottlenecks.
The IEEE 802.1Q virtual local area network (VLAN) protocol allows each physical port of this adapter to be
separated into multiple virtual NICs for added network segmentation and enhanced security and performance.
VLANs increase security by isolating traffic between users, and limiting broadcast traffic to within the same
VLAN domain also improves performance.
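The segmentation described above rests on a 4-byte tag that the adapter inserts into (or strips from) each Ethernet frame. As a minimal sketch, the tag layout defined by IEEE 802.1Q can be packed like this:

```python
import struct

def vlan_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte IEEE 802.1Q tag inserted after the source MAC.

    TPID 0x8100 identifies a tagged frame; the TCI field packs a 3-bit
    priority (PCP), a 1-bit drop-eligible indicator, and a 12-bit VLAN ID.
    """
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack(">HH", 0x8100, tci)

# A frame on VLAN 100 with priority 3 carries the tag 81 00 60 64.
print(vlan_tag(100, priority=3).hex())
```

The 12-bit VLAN ID field is why a single physical port can be split into up to 4094 usable virtual segments (IDs 0 and 4095 are reserved).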
Checksum & Segmentation Offload
Normally the TCP checksum is computed by the protocol stack. Segmentation offload is a technique for
increasing outbound throughput of high-bandwidth network connections by reducing CPU overhead. The
technique is called TCP segmentation offload (TSO) when applied to TCP, or generic segmentation offload
(GSO) when applied to other protocols.
Converged Network Utility (CNU)
This adapter supports Converged Network Utility (CNU), a manageability application used to configure converged
network adapters (CNAs) and Ethernet adapters on HPE servers. This host-based utility supports both a
GUI and a scriptable command-line interface, and can be used to configure Ethernet, FCoE, iSCSI, and NPAR
features on multiple OS platforms, including Windows and Linux. CNU can configure
multiple HPE adapters from various network controllers at the same time, giving users easier setup,
shorter reboot times, and a one-stop solution for multiple adapters.
This adapter ships with a suite of operating system-tailored configuration utilities that allow the user to run
initial diagnostics and configure adapter teaming. These include a patented teaming GUI for Microsoft Windows.
Receive Side Scaling (RSS)
RSS resolves the single-processor bottleneck by allowing the receive-side network load from a network adapter
to be shared across multiple processors. RSS enables packet receive-processing to scale with the number of
processors in the system.
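The core of RSS is deterministic flow-to-queue dispatch: the adapter hashes each packet's flow tuple and uses the result to pick a receive queue, so every packet of one flow lands on the same CPU and ordering is preserved. Real adapters use a Toeplitz hash in hardware; the sketch below substitutes CRC32 purely to illustrate the dispatch idea:

```python
import struct
import zlib

def rss_queue(src_ip: bytes, src_port: int, dst_ip: bytes, dst_port: int,
              num_queues: int) -> int:
    """Pick a receive queue for a flow. Hardware RSS uses a keyed Toeplitz
    hash; CRC32 stands in here as an illustrative placeholder."""
    key = src_ip + dst_ip + struct.pack(">HH", src_port, dst_port)
    return zlib.crc32(key) % num_queues

flow = (bytes([10, 0, 0, 1]), 40000, bytes([10, 0, 0, 2]), 443)
q = rss_queue(*flow, num_queues=4)
# Every packet of the same flow maps to the same queue, so per-flow
# ordering is preserved while different flows spread across CPUs.
assert q == rss_queue(*flow, num_queues=4)
print(f"flow -> queue {q}")
```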
This adapter is a validated, tested, and qualified solution that is optimized for HPE ProLiant servers. Hewlett
Packard Enterprise validates drivers for a wide variety of major operating systems with the full suite of web-based
enterprise management utilities, including HPE Intelligent Provisioning and HPE Systems Insight Manager, that
simplify network management.
This approach provides a more robust and reliable networking solution than offerings from other vendors and
provides users with a single point of contact for both their servers and their network adapters.
For overall improved system response, this adapter supports standard TCP/IP offloading techniques including:
TCP/IP and UDP checksum offload (TCO) moves TCP and IP checksum computation from the CPU to the network
adapter. Large send offload (LSO), also called TCP segmentation offload (TSO), allows TCP segmentation to be
handled by the adapter rather than the CPU.
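With LSO/TSO, the protocol stack hands the adapter one large buffer and the adapter slices it into MSS-sized segments on the wire. A simplified sketch of the slicing step (header replication and checksums omitted for brevity):

```python
def segment(payload: bytes, mss: int = 1460) -> list[bytes]:
    """Slice one large send into MSS-sized TCP segments -- the work that
    LSO/TSO moves from the CPU onto the adapter. (A real adapter also
    replicates and adjusts the TCP/IP headers per segment.)"""
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

# One 64,000-byte send from the stack becomes 44 wire-sized segments.
segments = segment(b"\x00" * 64000)
print(len(segments), len(segments[0]), len(segments[-1]))
```

The CPU saving comes from issuing one send call and one DMA descriptor chain instead of 44 individually built packets.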
Precision Time Protocol (IEEE 1588 PTP)
PTP synchronizes system clocks throughout a network, achieving clock accuracy in the sub-microsecond
range and making it suitable for measurement and control systems.
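PTP reaches that accuracy by exchanging four hardware timestamps between master and slave and solving for the clock offset and path delay. A sketch of the standard IEEE 1588 calculation, assuming a symmetric network path:

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Standard IEEE 1588 offset/delay calculation from four timestamps:
    t1 = master sends Sync, t2 = slave receives it,
    t3 = slave sends Delay_Req, t4 = master receives it.
    Assumes the forward and reverse path delays are equal."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: a slave clock running 5 us ahead over a 2 us one-way path.
offset, delay = ptp_offset_and_delay(100.0, 107.0, 200.0, 197.0)
print(offset, delay)  # 5.0 2.0
```

Timestamping in the adapter hardware, rather than in software, is what removes OS scheduling jitter from t1-t4 and makes sub-microsecond accuracy attainable.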
TCP/IP Offload Engine (TOE) shifts the processing of data in the TCP protocol stack from the server CPU to
the adapter's processor, freeing server CPU cycles for other operations.
Minimize the impact of overlay networking on host performance with tunnel offload support for VXLAN
and NVGRE. By offloading packet processing to adapters, customers can use overlay networking to increase
VM migration flexibility and virtualized overlay networks with minimal impact to performance. HPE Tunnel
Offloading increases I/O throughput, reduces CPU utilization, and lowers power consumption. Tunnel Offload
supports VMware's VXLAN and Microsoft's NVGRE solutions.
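What the adapter parses and builds in hardware for VXLAN is the 8-byte encapsulation header defined in RFC 7348: a flags byte with the I bit set, followed by a 24-bit VXLAN Network Identifier (VNI). A minimal sketch:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags byte 0x08 (I bit
    set, meaning the VNI is valid), reserved bytes, then the 24-bit VNI.
    This is the encapsulation the adapter offloads for overlay traffic."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack(">II", 0x08 << 24, vni << 8)

print(vxlan_header(5000).hex())  # 0800000000138800
```

The 24-bit VNI is what gives overlay networks roughly 16 million isolated segments, versus 4094 for plain 802.1Q VLANs.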
VMware NetQueue and Microsoft Virtual Machine Queue (VMQ)
VMware NetQueue is a technology that significantly improves the performance of 10 Gigabit Ethernet network
adapters in virtualized environments.
Windows Hyper-V Virtual Machine Queue (VMQ) is a feature available on servers running Windows Server 2008 R2 with
VMQ-enabled Ethernet adapters. VMQ uses hardware packet filtering to deliver packet data from an external virtual
machine network directly to virtual machines, which reduces the overhead of routing packets and copying
them from the management operating system to the virtual machine.
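The dispatch idea behind VMQ can be modeled in a few lines: the adapter matches each frame's destination MAC against per-VM filters and places it directly on that VM's queue, with unmatched traffic falling back to the management OS. This is a toy model for illustration, not the actual driver interface:

```python
from collections import defaultdict

class VmqDispatcher:
    """Toy model of VMQ dispatch: hardware MAC filters steer frames onto
    per-VM receive queues, skipping the software-switch copy that would
    otherwise route every frame through the management OS."""
    def __init__(self):
        self.filters = {}                    # destination MAC -> VM name
        self.queues = defaultdict(list)      # VM name -> received frames

    def assign_queue(self, mac: bytes, vm: str):
        self.filters[mac] = vm

    def receive(self, dest_mac: bytes, frame: bytes):
        # Unmatched frames take the default (management OS) queue.
        vm = self.filters.get(dest_mac, "management-os")
        self.queues[vm].append(frame)

nic = VmqDispatcher()
nic.assign_queue(bytes.fromhex("02005e000001"), "vm1")
nic.receive(bytes.fromhex("02005e000001"), b"payload-for-vm1")
nic.receive(bytes.fromhex("02005e0000ff"), b"unmatched")
print(sorted(nic.queues))
```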