perf: Optimize telemetry latency logging to reduce overhead #715
@@ -165,8 +165,9 @@ def get_instance(cls, connection: "Connection") -> FeatureFlagsContext:
             cls._initialize()
             assert cls._executor is not None

-            # Use the unique session ID as the key
-            key = connection.get_session_id_hex()
+            # Cache at HOST level - share feature flags across connections to same host
+            # Feature flags are per-host, not per-session
+            key = connection.session.host
Contributor: Is this similar to how we cache feature flag values in Java?

Contributor: Can we test this change via unit tests?

Author: Yes, this is the key we use in Java now (updated after the RL issue). Let me also add unit tests. Thanks.
             if key not in cls._context_map:
                 cls._context_map[key] = FeatureFlagsContext(
                     connection, cls._executor, connection.session.http_client
@@ -177,7 +178,8 @@ def get_instance(cls, connection: "Connection") -> FeatureFlagsContext:
     def remove_instance(cls, connection: "Connection"):
         """Removes the context for a given connection and shuts down the executor if no clients remain."""
         with cls._lock:
-            key = connection.get_session_id_hex()
+            # Use host as key to match get_instance
+            key = connection.session.host
             if key in cls._context_map:
                 cls._context_map.pop(key, None)
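Taken together, the two hunks amount to one change: FeatureFlagsContextFactory used to cache one FeatureFlagsContext per session (keyed by connection.get_session_id_hex()) and now caches one per host (keyed by connection.session.host), mirroring the JDBC driver. Below is a minimal standalone sketch of that pattern with the driver-specific pieces (Connection, the executor, http_client) stripped out; the class shapes here are assumptions for illustration, not the driver's actual code:

```python
import threading
from typing import Dict


class FeatureFlagsContext:
    """Stand-in for the real context; feature flags are fetched per host."""

    def __init__(self, host: str):
        self.host = host


class FeatureFlagsContextFactory:
    _lock = threading.Lock()
    _context_map: Dict[str, FeatureFlagsContext] = {}

    @classmethod
    def get_instance(cls, host: str) -> FeatureFlagsContext:
        # Key by host: every connection to the same workspace shares one
        # cached context, instead of creating one context per session.
        with cls._lock:
            if host not in cls._context_map:
                cls._context_map[host] = FeatureFlagsContext(host)
            return cls._context_map[host]

    @classmethod
    def remove_instance(cls, host: str) -> None:
        # Must use the same key as get_instance, otherwise entries leak.
        with cls._lock:
            cls._context_map.pop(host, None)
```

The invariant the second hunk protects is that get_instance and remove_instance agree on the key. Note that with host-level keying, remove_instance drops the context shared by every connection to that host; the docstring's "shuts down the executor if no clients remain" suggests the real code tracks remaining clients before doing so.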
Contributor: Can you please share your findings on what improved after this change? Do we see a memory drop after caching at the host level?

Author: This is a general optimisation, replicating what we have in JDBC. I did not test memory changes after this.
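On the unit-test question in the first thread, here is a hedged sketch of what such tests could assert, reusing the simplified factory from the sketch above (the hostnames and test names are hypothetical, and the real tests would exercise mocked Connection objects rather than raw host strings):

```python
import unittest


class TestHostLevelCaching(unittest.TestCase):
    def setUp(self):
        # Start each test from an empty cache.
        FeatureFlagsContextFactory._context_map.clear()

    def test_same_host_shares_context(self):
        a = FeatureFlagsContextFactory.get_instance("workspace-a.example.com")
        b = FeatureFlagsContextFactory.get_instance("workspace-a.example.com")
        self.assertIs(a, b)  # one context per host, not per session

    def test_different_hosts_get_distinct_contexts(self):
        a = FeatureFlagsContextFactory.get_instance("workspace-a.example.com")
        c = FeatureFlagsContextFactory.get_instance("workspace-b.example.com")
        self.assertIsNot(a, c)

    def test_remove_instance_uses_same_key_as_get_instance(self):
        FeatureFlagsContextFactory.get_instance("workspace-a.example.com")
        FeatureFlagsContextFactory.remove_instance("workspace-a.example.com")
        self.assertNotIn(
            "workspace-a.example.com", FeatureFlagsContextFactory._context_map
        )


if __name__ == "__main__":
    unittest.main()
```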