# Proposal: Improve `rx_q`s by putting a cap on the amount of packets they hold #84415

Labels: RFC (Request For Comments: want input from the community)
## Introduction

Zephyr has a system called `net_tc`. It allows using a configurable number of threads, called `rx_q`s and `tx_q`s, with different priorities to work on incoming packets of different packet priorities. The driver needs to tag the packets with their priorities. The idea is that, under heavy load, more important packets are still handled while lower-priority packets are no longer handled. By default Zephyr enables a single `rx_q`. In the following I argue that enabling more than one does not currently make sense, and I propose changes so that it does. Maybe I missed something; if so, please let me know.

## Problem description
My understanding is that packets are distributed to the multiple queues roughly in the following way (L2, L3, and L4 stand for the respective layers of the network stack, following the OSI model):
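A rough sketch of that flow as I understand it (simplified; the priority-to-queue mapping is my reading of the description above, not a verbatim reproduction of the `net_tc` code):

```
L2 driver                         net_tc                       L3/L4 processing
---------                         ------                       ----------------
receive frame
  -> allocate pkt from the    ->  map packet priority to   ->  rx_q[0] thread (high prio)
     shared pool,                 a traffic class, enqueue  ->  rx_q[1] thread (low prio)
     tag priority                 on the matching rx_q
```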
The idea of such an arrangement is that, in scenarios with heavy load, `rx_q[1]` may be blocked or slow while `rx_q[0]` is still fully operational. Doing so ensures that packets with lower priority are dropped while packets with higher priority are still operated upon.

The problem: all packets are allocated from the same pool irrespective of their priority (unless I missed something; if so, let me know). This means that if `rx_q[1]` is full, there is no way to allocate a high-priority packet and hand it off to `rx_q[0]` until a packet is freed by `rx_q[1]`. `rx_q[1]` in effect dictates the pace of `rx_q[0]`, rendering the above arrangement useless.

## Proposed change
Put a system in place that ensures there will always be space to allocate high-priority packets, even when there is no more space for low-priority packets.
## Detailed RFC
Put a fixed maximum cap on the amount of packets in each queue, for example `CONFIG_NET_PKT_RX_COUNT / CONFIG_NET_TC_RX_COUNT` for an even distribution. Use something other than `k_fifo` in `net_tc` that tracks the amount of packets. If a queue is full, drop the incoming packet. This is probably the best option, since the drivers need not even be aware of the priority of the packets they allocate.

## Proposed change (Detailed)
- `net_tc`: go from `k_fifo` to an alternative that tracks the count. For example a `msgq`?
- `rx_q`
## Dependencies

TBD

## Concerns and Unresolved Questions

TBD
## Alternatives

- `pkt` and `buf` slabs for each `rx_q`
- `rx_q` and `tx_q` to not confuse the audience about handling of multiple priorities.