[PR:1084] Add pop batch size support for ZMQ Consumer (#67)
Merged
mssonicbld merged 1 commit into Azure:202506 on Oct 9, 2025
Conversation
**What I did**

Add pop batch size support to `ZmqConsumerStateTable` to reduce memory usage and speed up CRM counter / DASH feedback updates when applying DASH configuration at scale.

Example: suppose a GNMI server pushes X entries to orchagent. With the current logic, `ZmqConsumerStateTable` moves all X entries into the `m_toSync` map, and DashOrch creates X entries in the bulker. However, the max bulk size is limited (currently 1000), which is far smaller than the size of `m_toSync` in this scale scenario. So peak memory during this window is `2 * X * size per object` (one copy in `m_toSync` plus one copy in the bulker) until all entries are applied to the ASIC.

1. With this change, only pop-batch-size entries at a time are popped into `m_toSync` and added to the bulker, so peak memory utilization is roughly halved in the DASH scale case.
2. A side effect of this change is that post-processing for each batch of items is done immediately in orchagent, so there is no delay in updating CRM or the GNMI feedback loop. Without it, post-processing starts only after all entries in `m_toSync` are applied to syncd, which is not capped in the current design.

**How I verified**

UT, and applying DASH config and making sure everything works.

Before the update:

```
[ RUN      ] ZmqConsumerStateTablePopSize.test
Consumer thread started
Entering select
Producer sent 150 elements
pops: 150
Consumer thread joined
tests/zmq_state_ut.cpp:636: Failure
Expected equality of these values:
  popCount
    Which is: 1
  4
popCount: 1, expected: 4
tests/zmq_state_ut.cpp:639: Failure
Expected equality of these values:
  recvdSizes[i]
    Which is: 150
  expectedSizes[i]
    Which is: 40
recvdSizes[0]: 150, expected: 40
[  FAILED  ] ZmqConsumerStateTablePopSize.test (16017 ms)
```

After the update:

```
[----------] 1 test from ZmqConsumerStateTablePopSize
[ RUN      ] ZmqConsumerStateTablePopSize.test
Consumer thread started
Entering select
Producer sent 150 elements
pops: 40
Entering select
pops: 40
Entering select
pops: 40
Entering select
pops: 30
Consumer thread joined
[       OK ] ZmqConsumerStateTablePopSize.test (2008 ms)
[----------] 1 test from ZmqConsumerStateTablePopSize (2008 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (2012 ms total)
[  PASSED  ] 1 test.
```
Collaborator (Author):

> Original PR: sonic-net/sonic-swss-common#1084

Collaborator (Author):

> /azp run