Conversation

@hawkw commented Sep 20, 2025

see #88, #500, etc.

Comment on lines +5 to +6
/// A spinlock that also provides mutual exclusion against the interrupt handler
/// for a particular interrupt vector.

Collaborator

one thing worth calling out: acquiring the IrqSpinlock means you can modify the vector however you want, but it does not ensure the interrupt handler itself is not also running. i don't think we need that kind of exclusivity much (at all?), but it might be worth being more precise here.

Owner Author

oh, you mean that the ISR might have already been entered on another thread when the interrupt gets masked, so you don't actually have mutual exclusion from the ISR, just from new calls to the ISR? yeah. hmm. i actually think we probably do need to handle that somehow, which is pretty annoying...

Owner Author

the obvious solution would be to have an atomic counter for each interrupt vector that's incremented when a handler is entered and decremented when it returns, and make the caller do this sequence:

  1. acquire spinlock (mutual exclusion against other non-ISR code)
  2. mask IRQ (mutual exclusion against new ISRs being entered)
  3. spin until ISR count is 0 (to wait for currently executing ISRs to finish)
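
A minimal sketch of that sequence, assuming one counter per vector; `IrqSpinlock`'s fields/methods and the `mask_irq`/`unmask_irq` helpers are placeholder names for illustration, not the actual kernel API:

```rust
use core::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

/// Hypothetical per-vector lock state; names are illustrative only.
pub struct IrqSpinlock {
    locked: AtomicBool,
    /// Number of handlers for this vector that are currently executing.
    isr_entries: AtomicUsize,
    vector: u8,
}

impl IrqSpinlock {
    pub fn lock(&self) {
        // 1. acquire the spinlock (mutual exclusion against other non-ISR code)
        while self
            .locked
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            core::hint::spin_loop();
        }
        // 2. mask the IRQ (no *new* ISRs for this vector can be entered)
        mask_irq(self.vector);
        // 3. spin until any ISR that was already running has returned
        while self.isr_entries.load(Ordering::Acquire) != 0 {
            core::hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        unmask_irq(self.vector);
        self.locked.store(false, Ordering::Release);
    }

    /// Called by the interrupt dispatch path immediately before the handler runs.
    pub fn isr_enter(&self) {
        self.isr_entries.fetch_add(1, Ordering::Acquire);
    }

    /// Called by the interrupt dispatch path after the handler returns.
    pub fn isr_exit(&self) {
        self.isr_entries.fetch_sub(1, Ordering::Release);
    }
}

// Placeholders for the platform-specific interrupt-controller operations.
fn mask_irq(_vector: u8) { /* e.g. set the mask bit for this vector */ }
fn unmask_irq(_vector: u8) { /* clear the mask bit */ }
```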

Collaborator

... as long as you're not acquiring the lock while in the ISR... which seems like a bug to me

personally i think it's fine to leave mutual exclusion with the ISR running anywhere as an exercise for the reader as long as the docs are clear that that's how the responsibilities go. everything else seems like a question of how far out of your way you want to go to say "this is a bug, stop doing that"?

Owner Author

> ... as long as you're not acquiring the lock while in the ISR... which seems like a bug to me

yeah, the docs for this thing will (eventually) state that the lock API is not intended to be used in ISRs and trying to acquire it inside an ISR will deadlock.

> personally i think it's fine to leave mutual exclusion with the ISR running anywhere as an exercise for the reader as long as the docs are clear that that's how the responsibilities go. everything else seems like a question of how far out of your way you want to go to say "this is a bug, stop doing that"?

i think we actually do need to worry about this. the reason for having an IRQ lock type at all is to be able to share non-atomic mutable state between the ISR and non-ISR code in a driver. we don't know at what point the interrupt handler function in the ISR will access the shared state; it might touch it at any point before returning from the ISR. so, we really cannot let non-ISR code access the state inside the lock unless we are sure that no ISRs are running and no new ISRs will be entered as long as the lock is held. luckily i think the ISR entry counter thingy is not too terrible...
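
For illustration, one hypothetical way a driver could pair that lock with the non-atomic state it protects, building on the `IrqSpinlock` sketch above; `IrqLocked` and its methods are made-up names, and this assumes a given vector's handler runs on at most one CPU at a time:

```rust
use core::cell::UnsafeCell;

/// Hypothetical wrapper pairing the counter-based lock with the driver state
/// shared between ISR and non-ISR code.
struct IrqLocked<T> {
    lock: IrqSpinlock,
    state: UnsafeCell<T>,
}

// SAFETY: all access to `state` is serialized by the lock plus the
// masked-and-drained IRQ protocol below.
unsafe impl<T: Send> Sync for IrqLocked<T> {}

impl<T> IrqLocked<T> {
    /// Non-ISR code: with the lock held, the IRQ is masked and any in-flight
    /// handler has returned, so handing out a mutable reference is sound.
    fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        self.lock.lock();
        let ret = f(unsafe { &mut *self.state.get() });
        self.lock.unlock();
        ret
    }

    /// ISR path: the handler only touches the state between `isr_enter()` and
    /// `isr_exit()`, so non-ISR code can wait out any in-flight handler.
    fn with_in_isr<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        self.lock.isr_enter();
        let ret = f(unsafe { &mut *self.state.get() });
        self.lock.isr_exit();
        ret
    }
}
```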

Comment on lines 5 to +6
pub use self::ctx::Context;
pub use self::lock::IrqRawMutex;

Collaborator

s/IrqRawMutex/IrqSpinlock?

Owner Author

lol
