FreeBSD Driver for Intel(R) Ethernet 10 Gigabit PCI Express Server Adapters
============================================================================
=== Modified 1.5.33 for FreeBSD on Hyper-V. Use at your own risk. ===

X540-AT2 VF 8086 1530
FreeBSD 13.2-RELEASE
March 10, 2021

Current progress: ixv0 shows up, and Hyper-V Manager (virtmgmt) reports the
NIC status as "OK (SR-IOV active)". PR as hard as you can.

~~~~Notes to myself~~~~
    fetch https://raw.githubusercontent.com/alastorid/if_ixv/main/a.sh
    chmod +x a.sh
    ./a.sh
~~~~Notes to myself~~~~


Contents
========

- Overview
- Identifying Your Adapter
- The VF Driver
- Building and Installation
- Configuration and Tuning
- Known Issues/Troubleshooting
- Support


Overview
========

This file describes the FreeBSD* driver for Intel(R) Ethernet. This driver
has been developed for use with all community-supported versions of FreeBSD.

For questions related to hardware requirements, refer to the documentation
supplied with your Intel Ethernet Adapter. All hardware requirements listed
apply to use with FreeBSD.

NOTE: Devices based on the Intel(R) Ethernet Controller X552 and Intel(R)
Ethernet Controller X553 do not support the following features:
  * Low Latency Interrupts (LLI)

The associated Virtual Function (VF) driver for this driver is ixv.


Identifying Your Adapter
========================

The driver is compatible with devices based on the following:
  * Intel(R) Ethernet Controller 82598
  * Intel(R) Ethernet Controller 82599
  * Intel(R) Ethernet Controller X520
  * Intel(R) Ethernet Controller X540
  * Intel(R) Ethernet Controller X550
  * Intel(R) Ethernet Controller X552
  * Intel(R) Ethernet Controller X553

For information on how to identify your adapter, and for the latest Intel
network drivers, refer to the Intel Support website:
http://www.intel.com/support


SFP+ Devices with Pluggable Optics
----------------------------------

82599-BASED ADAPTERS
--------------------
NOTES:
- If your 82599-based Intel(R) Network Adapter came with Intel optics or is an
  Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel optics
  and/or the direct attach cables listed below.
- When 82599-based SFP+ devices are connected back to back, they should be set
  to the same speed setting via ifconfig. Results may vary if you mix speed
  settings.

Supplier   Type                                    Part Numbers
--------   ----                                    ------------
SR Modules
Intel      DUAL RATE 1G/10G SFP+ SR (bailed)       FTLX8571D3BCV-IT
Intel      DUAL RATE 1G/10G SFP+ SR (bailed)       AFBR-703SDZ-IN2
Intel      DUAL RATE 1G/10G SFP+ SR (bailed)       AFBR-703SDDZ-IN1
LR Modules
Intel      DUAL RATE 1G/10G SFP+ LR (bailed)       FTLX1471D3BCV-IT
Intel      DUAL RATE 1G/10G SFP+ LR (bailed)       AFCT-701SDZ-IN2
Intel      DUAL RATE 1G/10G SFP+ LR (bailed)       AFCT-701SDDZ-IN1

The following is a list of 3rd party SFP+ modules that have received some
testing. Not all modules are applicable to all devices.

Supplier   Type                                    Part Numbers
--------   ----                                    ------------
Finisar    SFP+ SR bailed, 10g single rate         FTLX8571D3BCL
Avago      SFP+ SR bailed, 10g single rate         AFBR-700SDZ
Finisar    SFP+ LR bailed, 10g single rate         FTLX1471D3BCL
Finisar    DUAL RATE 1G/10G SFP+ SR (No Bail)      FTLX8571D3QCV-IT
Avago      DUAL RATE 1G/10G SFP+ SR (No Bail)      AFBR-703SDZ-IN1
Finisar    DUAL RATE 1G/10G SFP+ LR (No Bail)      FTLX1471D3QCV-IT
Avago      DUAL RATE 1G/10G SFP+ LR (No Bail)      AFCT-701SDZ-IN1
Finisar    1000BASE-T SFP                          FCLF8522P2BTL
Avago      1000BASE-T                              ABCU-5710RZ
HP         1000BASE-SX SFP                         453153-001

82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
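
If you are unsure which media types the installed adapter actually reports,
the supported media list can be dumped with ifconfig's -m flag. This is
standard FreeBSD ifconfig behavior rather than anything specific to this
driver, and ix0 below is only an example interface name:

  # ifconfig -m ix0

The "supported media" lines in the output should correspond to the module or
direct attach cable type that is plugged in.
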
Turning the laser off or on for SFP+
------------------------------------

"ifconfig ixX down" turns off the laser for 82599-based SFP+ fiber adapters.
"ifconfig ixX up" turns on the laser.


82599-based QSFP+ Adapters
--------------------------
NOTES:
- If your 82599-based Intel(R) Network Adapter came with Intel optics, it only
  supports Intel optics.
- 82599-based QSFP+ adapters only support 4x10 Gbps connections. 1x40 Gbps
  connections are not supported. QSFP+ link partners must be configured for
  4x10 Gbps.
- 82599-based QSFP+ adapters do not support automatic link speed detection.
  The link speed must be configured to either 10 Gbps or 1 Gbps to match the
  link partner's speed capabilities. Incorrect speed configurations will
  result in failure to link.
- Intel(R) Ethernet Converged Network Adapter X520-Q1 only supports the optics
  and direct attach cables listed below.

Supplier   Type                                    Part Numbers
--------   ----                                    ------------
Intel      DUAL RATE 1G/10G QSFP+ SRL (bailed)     E10GQSFPSR

82599-based QSFP+ adapters support all passive and active limiting QSFP+
direct attach cables that comply with SFF-8436 v4.1 specifications.


82598-BASED ADAPTERS
--------------------
NOTES:
- Intel(R) Ethernet Network Adapters that support removable optical modules
  only support their original module type (for example, the Intel(R) 10
  Gigabit SR Dual Port Express Module only supports SR optical modules). If
  you plug in a different type of module, the driver will not load.
- Hot swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOM) devices may support DA, SR, or LR modules. Other
  module types are not supported. Please see your system documentation for
  details.

The following is a list of SFP+ modules and direct attach cables that have
received some testing. Not all modules are applicable to all devices.

Supplier   Type                                    Part Numbers
--------   ----                                    ------------
Finisar    SFP+ SR bailed, 10g single rate         FTLX8571D3BCL
Avago      SFP+ SR bailed, 10g single rate         AFBR-700SDZ
Finisar    SFP+ LR bailed, 10g single rate         FTLX1471D3BCL

82598-based adapters support all passive direct attach cables that comply with
SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach cables
are not supported.

Third party optic modules and cables referred to above are listed only for the
purpose of highlighting third party specifications and potential
compatibility, and are not recommendations or endorsements or sponsorship of
any third party's product by Intel. Intel is not endorsing or promoting
products made by any third party and the third party reference is provided
only to share information regarding certain optic modules and cables with the
above specifications. There may be other manufacturers or suppliers, producing
or supplying optic modules and cables with similar or matching descriptions.
Customers must use their own discretion and diligence to purchase optic
modules and cables from any third party of their choice. Customers are solely
responsible for assessing the suitability of the product and/or devices and
for the selection of the vendor for purchasing any product. THE OPTIC MODULES
AND CABLES REFERRED TO ABOVE ARE NOT WARRANTED OR SUPPORTED BY INTEL. INTEL
ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED
WARRANTY, RELATING TO SALE AND/OR USE OF SUCH THIRD PARTY PRODUCTS OR
SELECTION OF VENDOR BY CUSTOMERS.
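
Identifying the VF inside a Hyper-V guest: before loading the VF driver, it
can be useful to confirm that the SR-IOV Virtual Function has actually been
passed through to the guest. A minimal check with the standard FreeBSD
pciconf tool (no driver-specific options assumed) is:

  # pciconf -lv

Look for an Intel network entry whose vendor/device pair matches the device
noted at the top of this README (8086 1530). Before a driver is attached it
will typically be listed as noneN@pci...; after a successful kldload of the
VF driver it should appear as ixvN instead.
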
The VF Driver
=============

In the FreeBSD guest, the ixv driver is loaded and functions using the VF
device assigned to it. The VF driver provides most of the same functionality
as the core driver, but it is actually subordinate to the host. Access to many
controls is accomplished by a request to the host via what is called the
"Admin queue." These are startup and initialization events, however; once in
operation, the device is self-contained and should achieve near-native
performance.

Some notable limitations of the VF environment:
  * For security reasons, the driver is never permitted to be promiscuous;
    therefore a tcpdump will not behave the same way with the interface.
  * Media info is not available from the PF, so it will always appear as auto.


Building and Installation
=========================

NOTE: This driver package is to be used only as a standalone archive and the
user should not attempt to incorporate it into the kernel source tree.

In the instructions below, x.x.x is the driver version as indicated in the
name of the driver tar file.

1. Move the base driver tar file to the directory of your choice. For
   example, use /home/username/ix or /usr/local/src/ix.

2. Untar/unzip the archive:
     # tar xzf ix-x.x.x.tar.gz
   This will create the ix-x.x.x directory.

3. To install the man page:
     # cd ix-x.x.x
     # gzip -c ix.4 > /usr/share/man/man4/ix.4.gz

4. To load the driver onto a running system:
     # cd ix-x.x.x/src
     # make
     # kldload ./if_ix.ko

5. To assign an IP address to the interface, enter the following, where X is
   the interface number for the device:
     # ifconfig ixX <IP_address>

6. Verify that the interface works. Enter the following, where <IP_address>
   is the IP address for another machine on the same subnet as the interface
   that is being tested:
     # ping <IP_address>

7. If you want the driver to load automatically when the system is booted:
     # cd ix-x.x.x/src
     # make
     # make install

   Edit /boot/loader.conf, and add the following line:
     if_ix_load="YES"

   Edit /etc/rc.conf, and create the appropriate ifconfig_ixX entry:
     ifconfig_ixX="<ifconfig_settings>"

   Example usage:
     ifconfig_ix0="inet 192.168.10.1 netmask 255.255.255.0"

   NOTE: For assistance, see the ifconfig man page.


Speed and Duplex Configuration
------------------------------

In addressing speed and duplex configuration issues, you need to distinguish
between copper-based adapters and fiber-based adapters.

In the default mode, an Intel(R) Ethernet Network Adapter using copper
connections will attempt to auto-negotiate with its link partner to determine
the best setting. If the adapter cannot establish link with the link partner
using auto-negotiation, you may need to manually configure the adapter and
link partner to identical settings to establish link and pass packets. This
should only be needed when attempting to link with an older switch that does
not support auto-negotiation or one that has been forced to a specific speed
or duplex mode. Your link partner must match the setting you choose.

1 Gbps speeds and higher cannot be forced. Use the autonegotiation advertising
setting to manually set devices for 1 Gbps and higher (a sketch follows the
caution below).

Caution: Only experienced network administrators should force speed and duplex
or change autonegotiation advertising manually. The settings at the switch
must always match the adapter settings. Adapter performance may suffer or your
adapter may not operate if you configure the adapter differently from your
switch.
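
As noted above, speeds of 1 Gbps and higher are not forced with a fixed media
setting but by restricting what the adapter advertises. The in-tree ix driver
exposes an advertise_speed sysctl for this on the physical function (the VF
driver cannot change link speed). The flag values below are taken from the
ixgbe(4) documentation; verify them on your system before relying on them:

  # sysctl dev.ix.0.advertise_speed      (show the currently advertised speeds)
  # sysctl dev.ix.0.advertise_speed=0x4  (advertise 10 Gbps only; 0x2 = 1 Gbps, 0x1 = 100 Mbps)
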
An Intel(R) Ethernet Network Adapter using fiber-based connections, however,
will not attempt to auto-negotiate with its link partner since those adapters
operate only in full duplex and only at their native speed.

By default, the adapter auto-negotiates the speed and duplex of the
connection. If there is a specific need, the ifconfig utility can be used to
configure the speed and duplex settings on the adapter.

NOTE: For the Intel(R) Ethernet Connection X552 10 GbE SFP+ you must specify
the desired speed.

Example usage:
  # ifconfig ixX <IP_address> media 100baseTX mediaopt full-duplex

NOTE: Only use mediaopt to set the driver to full-duplex. If mediaopt is not
specified and you are not running at gigabit speed, the driver defaults to
half-duplex.

If the interface is currently forced to 100 full duplex, you must use this
command to change to half duplex:
  # ifconfig ixX <IP_address> media 100baseTX -mediaopt full-duplex

This driver supports the following media type options:

  Media Type     Description
  ----------     -----------
  autoselect     Enables auto-negotiation for speed and duplex.
  10baseT/UTP    Sets speed to 10 Mbps. Use the ifconfig mediaopt option to
                 select full-duplex mode.
  100baseTX      Sets speed to 100 Mbps. Use the ifconfig mediaopt option to
                 select full-duplex mode.
  1000baseTX     Sets speed to 1000 Mbps. In this case, the driver supports
                 only full-duplex mode.
  1000baseSX     Sets speed to 1000 Mbps. In this case, the driver supports
                 only full-duplex mode.

For more information on the ifconfig utility, refer to the ifconfig man page.


Configuration and Tuning
========================

Important System Configuration Changes
--------------------------------------

- Edit the file /etc/sysctl.conf, and add the line:
    hw.intr_storm_threshold=0
  (the default is 1000)

- Best throughput results are seen with a large MTU; use 9710 if possible.

- The default number of descriptors per ring is 1024. Increasing this may
  improve performance, depending on your use case.

- If you have a choice, run on a 64-bit OS rather than a 32-bit OS.


Jumbo Frames
------------

Jumbo Frames support is enabled by changing the Maximum Transmission Unit
(MTU) to a value larger than the default value of 1500.

Use the ifconfig command to increase the MTU size. For example, enter the
following where X is the interface number:
  # ifconfig ixX mtu 9000

To confirm an interface's MTU value, use the ifconfig command.

To confirm the MTU used between two specific devices, use:
  # route get <destination_IP_address>

NOTE: The maximum MTU setting for jumbo frames is 9710. This corresponds to
the maximum jumbo frame size of 9728 bytes.

NOTE: This driver will attempt to use multiple page-sized buffers to receive
each jumbo packet. This should help to avoid buffer starvation issues when
allocating receive packets.

NOTE: Packet loss may have a greater impact on throughput when you use jumbo
frames. If you observe a drop in performance after enabling jumbo frames,
enabling flow control may mitigate the issue.

NOTE: For 82599-based network connections, if you are enabling jumbo frames in
a virtual function (VF), jumbo frames must first be enabled in the physical
function (PF). The VF MTU setting cannot be larger than the PF MTU.
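
To make a jumbo MTU persist across reboots, the mtu keyword can simply be
appended to the interface's /etc/rc.conf entry from the installation section
above. The interface name and address below are placeholders; on the Hyper-V
VF the interface will typically be ixv0 rather than ix0:

  ifconfig_ix0="inet 192.168.10.1 netmask 255.255.255.0 mtu 9000"

Keep in mind the VF note above: on a virtual function, the PF must already
allow at least this MTU.
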
VLANS
-----

To create a new VLAN interface:
  # ifconfig <vlan_name> create

To associate the VLAN interface with a physical interface and assign a VLAN
ID, IP address, and netmask:
  # ifconfig <vlan_name> <ip_address> netmask <subnet_mask> vlan <vlan_id> vlandev <physical_interface>

Example:
  # ifconfig vlan10 10.0.0.1 netmask 255.255.255.0 vlan 10 vlandev ix0

In this example, all packets will be marked on egress with 802.1Q VLAN tags,
specifying a VLAN ID of 10.

To remove a VLAN interface:
  # ifconfig <vlan_name> destroy


Checksum Offload
----------------

Checksum offloading supports both TCP and UDP packets and is supported for
both transmit and receive.

Checksum offloading can be enabled or disabled using ifconfig. Both transmit
and receive offloading will be either enabled or disabled together. You cannot
enable/disable one without the other.

To enable checksum offloading:
  # ifconfig ixX rxcsum

To disable checksum offloading:
  # ifconfig ixX -rxcsum

To confirm the current setting:
  # ifconfig ixX

Look for the presence or absence of the following line:
  options=3 <RXCSUM,TXCSUM>

See the ifconfig man page for further information.


TSO
---

TSO (TCP Segmentation Offload) supports both IPv4 and IPv6. TSO can be
disabled and enabled using the ifconfig utility or sysctl.

NOTE: TSO requires Tx checksum; if Tx checksum is disabled, TSO will also be
disabled.

To enable/disable TSO in the stack:
  # sysctl net.inet.tcp.tso=0 (or 1 to enable it)

Doing this disables/enables TSO in the stack and affects all installed
adapters.

To disable BOTH TSO IPv4 and IPv6, where X is the number of the interface in
use:
  # ifconfig ixX -tso

To enable BOTH TSO IPv4 and IPv6:
  # ifconfig ixX tso

You can also enable/disable IPv4 TSO or IPv6 TSO individually. Simply replace
tso|-tso in the above command with tso4 or tso6. For example, to disable TSO
IPv4:
  # ifconfig ixX -tso4

To disable TSO IPv6:
  # ifconfig ixX -tso6


LRO
---

LRO (Large Receive Offload) may provide Rx performance improvement. However,
it is incompatible with packet-forwarding workloads. You should carefully
evaluate the environment and enable LRO when possible.

To enable:
  # ifconfig ixX lro

It can be disabled by using:
  # ifconfig ixX -lro
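
The ifconfig offload flags shown in the Checksum Offload, TSO, and LRO
sections above only last until the next reboot. To apply a setting at boot,
for example disabling LRO and TSO on a machine that forwards packets, the
same flags can be appended to the interface's /etc/rc.conf entry, since that
string is passed directly to ifconfig. The interface name and address are
placeholders:

  ifconfig_ix0="inet 192.168.10.1 netmask 255.255.255.0 -lro -tso"
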
Link-Level Flow Control (LFC)
-----------------------------

Ethernet Flow Control (IEEE 802.3x) can be configured with sysctl to enable
receiving and transmitting pause frames for ix. When transmit is enabled,
pause frames are generated when the receive packet buffer crosses a predefined
threshold. When receive is enabled, the transmit unit will halt for the time
delay specified when a pause frame is received.

NOTE: You must have a flow control capable link partner.

Flow Control is enabled by default.

The ix driver also supports a "hw.ix.flow_control" load-time parameter to set
the initial flow control configuration on all supported interfaces in a
system. You can set the tunable using the kenv command and then reload the
driver, or by adding a line to /boot/loader.conf and rebooting the system. Use
sysctl to change the flow control settings for a single interface without
reloading the driver.

The available values for flow control are:
  0 = Disable flow control
  1 = Enable Rx pause
  2 = Enable Tx pause
  3 = Enable Rx and Tx pause

Examples:
- To set the tunable in the config file, add the following line to
  /boot/loader.conf and then reboot the system:
    hw.ix.flow_control=3
- To set the tunable using kenv, use the following command and then reload
  the driver:
    # kenv hw.ix.flow_control=3
- To remove the tunable and return the system to its default settings:
    # kenv -u hw.ix.flow_control
- To enable a flow control setting with sysctl:
    # sysctl dev.ix.<interface_num>.fc=3
- To disable flow control using sysctl:
    # sysctl dev.ix.<interface_num>.fc=0

NOTE:
- The ix driver requires flow control on both the port and link partner. If
  flow control is disabled on one of the sides, the port may appear to hang on
  heavy traffic.
- For 82598 backplane cards entering 1 gigabit mode, the flow control default
  behavior is changed to off. Flow control in 1 gigabit mode on these devices
  can lead to transmit hangs.
- For more information on priority flow control, refer to the "Data Center
  Bridging (DCB)" section in this README.
- The VF driver does not have access to flow control. It must be managed from
  the host side.


DMAC
----

Valid Range: 0, 41-10000

This parameter enables or disables the DMA Coalescing feature. Values are in
microseconds and set the internal DMA Coalescing timer. DMAC is available on
Intel(R) X550 (and later) based adapters.

DMA (Direct Memory Access) allows the network device to move packet data
directly to the system's memory, reducing CPU utilization. However, the
frequency and random intervals at which packets arrive do not allow the system
to enter a lower power state. DMA Coalescing allows the adapter to collect
packets before it initiates a DMA event. This may increase network latency but
also increases the chances that the system will enter a lower power state.

Turning on DMA Coalescing may save energy. DMA Coalescing must be enabled
across all active ports in order to save platform power. InterruptThrottleRate
(ITR) should be set to dynamic. When ITR=0, DMA Coalescing is automatically
disabled.

A guide containing information on how to best configure your platform is
available on the Intel website.


Known Issues/Troubleshooting
============================

UDP Stress Test Dropped Packet Issue
------------------------------------

Under small packet UDP stress with the ix driver, the system may drop UDP
packets due to socket buffers being full. Setting the driver Intel Ethernet
Flow Control variables to the minimum may resolve the issue. You may also try
increasing the kernel's default socket buffer sizes; on FreeBSD these are
controlled by sysctls such as kern.ipc.maxsockbuf and net.inet.udp.recvspace.


Attempting to configure larger MTUs with a large number of processors may
generate the error message "ixX: could not setup receive structures"
--------------------------------------------------------------------------

When using the ix driver with RSS autoconfigured based on the number of cores
(the default setting) and that number is larger than 4, increase the memory
resources allocated for the mbuf pool as follows:

Add to the sysctl.conf file for the system:
  kern.ipc.nmbclusters=262144
  kern.ipc.nmbjumbop=262144
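
Before raising these limits, it may help to confirm that the mbuf pool is
actually the bottleneck. Standard FreeBSD tools can show current usage,
denied requests, and the configured ceilings (no driver-specific sysctls are
assumed here):

  # netstat -m                                      (mbuf/cluster usage and denied requests)
  # sysctl kern.ipc.nmbclusters kern.ipc.nmbjumbop  (current limits)
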
Lower than expected performance
-------------------------------

Some PCIe x8 slots are actually configured as x4 slots. These slots have
insufficient bandwidth for full line rate with dual port and quad port
devices. In addition, if you put a PCIe v4.0 or v3.0-capable adapter into a
PCIe v2.x slot, you cannot get full bandwidth.

The driver detects this situation and writes one of the following messages in
the system log:

  "PCI-Express bandwidth available for this card is not sufficient for
  optimal performance. For optimal performance a x8 PCI-Express slot is
  required."

or

  "PCI-Express bandwidth available for this device may be insufficient for
  optimal performance. Please move the device to a different PCI-e link with
  more lanes and/or higher transfer rate."

If this error occurs, moving your adapter to a true PCIe v3.0 x8 slot will
resolve the issue.


'no PRT entry for x.x.INTx' error on pcib4 in dmesg
---------------------------------------------------

On systems that use legacy interrupts, you may see the following in dmesg:

  pci4: on pcib4
  pcib4: no PRT entry for <x>.0.INTA
  pcib4: no PRT entry for <x>.0.INTB

For example:

  pci4: on pcib4
  pcib4: no PRT entry for 3.0.INTA
  pcib4: no PRT entry for 3.0.INTB

To work around the issue, add the following lines to /boot/loader.conf:

  hw.pci0.<x>.0.INTA.irq=16
  hw.pci0.<x>.0.INTB.irq=17

where <x> is the number found in the dmesg message (either 3 or 4).


Support
=======

For general information, go to the Intel support website at:
http://www.intel.com/support/

If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to freebsd@intel.com.


Copyright(c) 1999 - 2021 Intel Corporation.


Trademarks
==========

Intel is a trademark or registered trademark of Intel Corporation or its
subsidiaries in the United States and/or other countries.

* Other names and brands may be claimed as the property of others.