fix!(core): Make HubSwitchGuard !Send to prevent thread corruption #957
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: szokeasaurusrex/envelope-into_items
Conversation
Cursor Bugbot has reviewed your changes and found 1 potential issue.
lcian left a comment
Looks good to me.
szokeasaurusrex: This current implementation is still flawed; if the same span is re-entered on the same thread (possible in async contexts), the second entry overwrites the original HubSwitchGuard, corrupting the behavior. I am working on a fix.

lcian: @szokeasaurusrex it rewrites the original guard with the same one, I think, so what's the issue there?

lcian: I've tested some scenarios manually and didn't find any issues. I assume you're also testing manually.

szokeasaurusrex: @lcian Codex AI agent identified the potential issue here as I was working on #946. I have a test locally which reproduces the behavior. Will commit it with a more detailed explanation of the problem tomorrow 👍 I suppose it's possible that Codex is wrong and the local test is doing something which is not supposed to ever happen (the scenario is admittedly a bit contrived), but in any case, I think it's worth investigating properly. So, that is what I'm doing now. I will let you know if I need any assistance.
HubSwitchGuard manages thread-local hub state but was Send, allowing it to be moved to another thread. When dropped on the wrong thread, it would corrupt that thread's hub state instead of restoring the original thread's hub. To fix this, add PhantomData<MutexGuard<'static, ()>> to make the guard !Send while keeping it Sync. This prevents the guard from being moved across threads at compile time.

Additionally, refactor sentry-tracing to store guards in thread-local storage keyed by span ID instead of in span extensions. This fixes a related bug where multiple threads entering the same span would clobber each other's guards.

Fixes #943
Refs RUST-130

Co-Authored-By: Claude <noreply@anthropic.com>
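To illustrate why a `Send` guard over thread-local state is dangerous, here is a self-contained sketch (not SDK code; `CURRENT_HUB`, `Guard`, and `switch_to` are hypothetical stand-ins):

```rust
use std::cell::Cell;
use std::thread;

thread_local! {
    // stand-in for the SDK's thread-local "current hub"
    static CURRENT_HUB: Cell<u32> = Cell::new(0);
}

// A guard shaped like the pre-fix HubSwitchGuard: it swaps the
// thread-local value and restores the previous one on drop. Because it
// is Send, nothing stops it from being dropped on a different thread.
struct Guard {
    previous: u32,
}

impl Guard {
    fn switch_to(new: u32) -> Self {
        let previous = CURRENT_HUB.with(|c| c.replace(new));
        Guard { previous }
    }
}

impl Drop for Guard {
    fn drop(&mut self) {
        // restores on whichever thread drop() happens to run on
        CURRENT_HUB.with(|c| c.set(self.previous));
    }
}

fn main() {
    let guard = Guard::switch_to(1);
    thread::spawn(move || {
        // Moving the guard here means its drop clobbers *this* thread's
        // value, while the spawning thread is never restored.
        drop(guard);
    })
    .join()
    .unwrap();
}
```

With the `PhantomData<MutexGuard<'static, ()>>` marker from this PR, the `thread::spawn` above would fail to compile instead of silently corrupting state.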
szokeasaurusrex left a comment
I have made some pretty substantial changes here. Would appreciate a re-review.
Please also see the updated description 🙏
```rust
    {
        let _enter_b = span_b.enter();
    }
}
```
@lcian This reproduces the bug I was talking about. With the original implementation, span_a and span_b ended up as two separate transactions, because dropping _reenter_a restored the original hub. Two underlying issues were causing this behavior; I fixed both of them in this PR.

The first fix was changing SPAN_GUARDS so that it holds not just a single guard per span, but a stack of guards that are pushed on each entry and popped on each exit. This way, we only restore the original hub on the final exit from the span.

That first fix was only part of the solution, though. To get the span structure correct here, I also needed to fix #946. For that, we needed to fork the hub, as you suggested last week. Otherwise, if we just use the hub on the span directly, we don't actually get proper scope isolation.
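For reference, the shape of the re-entrancy scenario might look roughly like this (a sketch using the `tracing` crate; variable names mirror the comment above and are otherwise hypothetical):

```rust
fn main() {
    let span_a = tracing::info_span!("span_a");
    let _enter_a = span_a.enter();
    {
        // re-enter span_a while it is still entered
        let _reenter_a = span_a.enter();
    } // pre-fix: this exit dropped span_a's single stored guard and
      // restored the original hub too early
    let span_b = tracing::info_span!("span_b");
    {
        let _enter_b = span_b.enter();
    } // pre-fix result: span_b became a second, separate transaction
      // instead of a child of span_a
}
```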
```rust
data.hub.configure_scope(|scope| {
    scope.set_span(data.parent_sentry_span.clone());
});
```
Now that we are forking the hub, I actually don't think we need to be setting the parent span on the hub here; the hub we restore should already have the parent span set (unless some other code intentionally changed the span on that hub, which we probably should respect).
Suggested change (remove these lines):

```rust
data.hub.configure_scope(|scope| {
    scope.set_span(data.parent_sentry_span.clone());
});
```
What do you think @lcian? Am I missing something?
The PR has changed substantially and should be re-reviewed.
Description
This PR does three important things, which are all interdependent, so I think it makes sense to do it all in a single PR.
(1) Making `HubSwitchGuard` `!Send`

This PR makes `HubSwitchGuard` `!Send` by adding `PhantomData<MutexGuard<'static, ()>>` while keeping it `Sync`. The type system now prevents the guard from being moved across threads at compile time.

This change is important because `HubSwitchGuard` manages thread-local hub state, but was previously `Send`, allowing it to be moved to another thread. When dropped on the wrong thread, it could corrupt that thread's hub state instead of restoring the original thread's hub. This change resolves #943.

Tests related to this change: `!Send` is enforced by the compiler, so there are no additional tests here. A sketch of the marker pattern follows below.
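A minimal sketch of the marker pattern (the field name `_not_send` is hypothetical, and the real guard's other fields are elided):

```rust
use std::marker::PhantomData;
use std::sync::MutexGuard;

pub struct HubSwitchGuard {
    // MutexGuard<'static, ()> is !Send but Sync, and PhantomData
    // propagates exactly those auto traits at zero runtime cost, so the
    // containing struct loses Send while staying Sync.
    _not_send: PhantomData<MutexGuard<'static, ()>>,
}

fn assert_sync<T: Sync>() {}

fn main() {
    assert_sync::<HubSwitchGuard>(); // OK: the guard is still Sync
    // fn assert_send<T: Send>() {}
    // assert_send::<HubSwitchGuard>();
    // ^ error[E0277]: `MutexGuard<'static, ()>` cannot be sent between
    //   threads safely
}
```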
(2) A new stack for spans to manage `HubSwitchGuard`s

As `HubSwitchGuard` is now `!Send`, we needed a new way to manage the `HubSwitchGuard` associated with a given span in the `tracing` integration. Previously, the guards lived directly on the span via the `SentrySpanData` type, but this no longer works, since spans need to be `Send`. So, we now declare a thread-local mapping from span IDs to `HubSwitchGuard`s. The guards are stored in a stack, as each span will have one guard per span entry (spans can be entered multiple times, even on the same thread, and without the stack we would restore the original hub too early). We drop the guards on span exit in LIFO order, roughly as sketched below.
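A rough sketch of that bookkeeping (simplified, not the PR's exact code: the real integration uses tracing's span IDs rather than a bare `u64`, and the names here are hypothetical):

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// Stand-in for sentry-core's HubSwitchGuard; dropping it would restore
// the previously current hub on this thread.
struct HubSwitchGuard;

thread_local! {
    // Per-thread: a stack of guards for each span ID. A guard is pushed
    // on every span entry and popped (LIFO) on every exit, so the
    // original hub is only restored when the outermost entry exits.
    static SPAN_GUARDS: RefCell<HashMap<u64, Vec<HubSwitchGuard>>> =
        RefCell::new(HashMap::new());
}

fn on_enter(span_id: u64, guard: HubSwitchGuard) {
    SPAN_GUARDS.with(|g| g.borrow_mut().entry(span_id).or_default().push(guard));
}

fn on_exit(span_id: u64) {
    SPAN_GUARDS.with(|g| {
        let mut map = g.borrow_mut();
        if let Some(stack) = map.get_mut(&span_id) {
            stack.pop(); // dropping the guard restores the previous hub
            if stack.is_empty() {
                map.remove(&span_id);
            }
        }
    });
}
```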
Tests related to this change: `span_reentrancy.rs` contains tests to validate correct span tree structure when re-entering the same span multiple times. A previous iteration of this PR only allowed a single guard per span; the test failed against that implementation because it produced an incorrect span structure (two transactions instead of one). The test only passes with the guard stack.
(3) Forking the `Hub` on each span (re-)entry

Change (2) is insufficient to make the `span_reentrancy.rs` test pass, because with only that change there is still the fundamental problem that each span does not get its own `Hub` to manage state. Thus, in that test, I believe we were actually only ever using one hub, because there was no place where we forked the hub. So, on span exit, we prematurely set the parent span on the hub.

Forking the hub ensures proper isolation, so the span gets set back at the right time (I also suspect we don't need to manually set it back, but I am unsure).
This change resolves #946.
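As a rough illustration of the forking (a sketch, not the PR's exact code; `Hub::new_from_top` and `Hub::current` are existing sentry-core APIs, while the function name is hypothetical):

```rust
use std::sync::Arc;
use sentry_core::Hub;

// On each span (re-)entry, fork a new hub from the current one so the
// span gets its own isolated scope state. The fork is then made current
// via a HubSwitchGuard, which is pushed onto the per-span guard stack
// from change (2) and popped (restoring the previous hub) on exit.
fn fork_hub_for_span_entry() -> Arc<Hub> {
    Arc::new(Hub::new_from_top(Hub::current()))
}
```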
Tests related to this change: `span_cross_thread.rs` ensures that entering the same span in multiple threads produces the correct span tree; basically, it is a reproduction of #946. `span_reentrancy.rs` also only passes with this change.