feat(room): add `Room::remove_outdated_seen_knock_requests_ids` method #4422
Conversation
pub enum RoomMembersUpdateKind {
    /// The whole list of room members was reloaded.
    FullReload,
    /// A few members were updated, their user ids are included.
    Partial(BTreeSet<OwnedUserId>),
While this distinction is not used at the moment, I hope it'll make building a live room members observable in the future a bit easier.
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #4422      +/-   ##
==========================================
+ Coverage   85.41%   85.44%   +0.02%
==========================================
  Files         283      283
  Lines       31490    31528      +38
==========================================
+ Hits        26897    26938      +41
+ Misses       4593     4590       -3

☔ View full report in Codecov by Sentry.
@@ -1167,4 +1167,79 @@ mod tests {

        Ok(())
    }

    #[async_test]
    #[allow(dependency_on_unit_never_type_fallback)]
Can we get rid of it, or comment why it's needed?
crates/matrix-sdk-base/src/client.rs (outdated)
if let Err(err) = room.room_member_updates_sender.send(RoomMembersUpdateKind::FullReload) {
    warn!("Failed to broadcast full room members reload: {err}");
}
Ditto here.
@@ -262,6 +265,15 @@ fn heroes_filter<'a>(
    move |user_id| user_id != own_user_id && !member_hints.service_members.contains(user_id)
}

/// The kind of room member updates that just happened.
#[derive(Debug, Clone)]
pub enum RoomMembersUpdateKind {
Could this be called `RoomMembersUpdate` instead?
@@ -297,6 +309,7 @@ impl Room {
            ))),
            room_info_notable_update_sender,
            seen_knock_request_ids_map: SharedObservable::new_async(None),
            room_member_updates_sender: broadcast::channel(10).0,
Can you use local variables instead? It's really not clear what the 0 is, so before assigning the sender here, you'd have a line above like `let (sender, _receiver) = broadcast::channel(10);`, and then use `sender` here.
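As a minimal, self-contained sketch of that suggestion (assuming the tokio broadcast channel the diff already uses; the `_receiver` binding name is just illustrative):

```rust
use tokio::sync::broadcast;

fn main() {
    // Binding both halves makes it obvious which side the old `.0` referred to;
    // the receiver is simply kept alive and never read here.
    let (sender, _receiver) = broadcast::channel::<String>(10);

    // `sender` is what would then be stored in the struct field.
    let _ = sender.send("room members updated".to_owned());
}
```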
@@ -174,6 +174,9 @@ pub struct Room {
    /// user has marked as seen so they can be ignored.
    pub seen_knock_request_ids_map:
        SharedObservable<Option<BTreeMap<OwnedEventId, OwnedUserId>>, AsyncLock>,

    /// A sender that will notify receivers when room member updates happen.
    pub room_member_updates_sender: broadcast::Sender<RoomMembersUpdateKind>,
Since this is `pub`, it's part of the public API, and it must be tested that it works correctly when subscribing 😈
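For illustration only, the general shape such a subscription test could take, written against a bare tokio broadcast channel rather than the actual `Room` field (the real test would subscribe through `room.room_member_updates_sender` and trigger the SDK code paths instead of sending manually):

```rust
use tokio::sync::broadcast;

#[tokio::test]
async fn test_subscribers_receive_member_updates() {
    // Stand-in for `room_member_updates_sender`.
    let (sender, _keep_alive) = broadcast::channel::<&'static str>(10);

    // Subscribe before the update is produced, otherwise it would be missed.
    let mut receiver = sender.subscribe();

    sender.send("full reload").expect("there should be a live receiver");

    assert_eq!(receiver.recv().await.expect("the channel closed"), "full reload");
}
```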
// We're not calling `get_seen_join_request_ids` here because we need to keep
// the Mutex's guard until we've updated the data
let mut current_seen_events = if current_seen_events_guard.is_none() {
    self.load_cached_knock_request_ids().await?
} else {
    current_seen_events_guard.clone().unwrap()
};
Think this is the third duplication of this piece of code, can we refactor it into a small helper?
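One possible shape for such a helper, sketched from the types visible in this PR (`load_cached_knock_request_ids` and the `Option<BTreeMap<OwnedEventId, OwnedUserId>>` guard contents both appear in the diff; the helper name and error type are placeholders, and the surrounding impl block and imports are elided):

```rust
/// Hypothetical helper: reuse the seen knock request ids already held by the
/// mutex guard, or load them from the cache when the guard holds nothing yet.
async fn seen_knock_request_ids_from_guard(
    &self,
    guard: &Option<BTreeMap<OwnedEventId, OwnedUserId>>,
) -> Result<BTreeMap<OwnedEventId, OwnedUserId>> {
    match guard {
        Some(seen_events) => Ok(seen_events.clone()),
        None => self.load_cached_knock_request_ids().await,
    }
}
```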
// If there is nothing to compare the current seen knock ids to, return early
if not_knocked_user_ids.is_empty() {
    return Ok(());
}
Can you put this code before we take the `seen_knock_request_ids_map` mutex?
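Roughly, the suggested ordering (the lock-acquisition details aren't shown in this snippet, so they are elided here too):

```rust
// Cheap check first: bail out before touching the mutex when there is no work.
if not_knocked_user_ids.is_empty() {
    return Ok(());
}

// Only now take the `seen_knock_request_ids_map` guard and update the data.
```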
let mut to_remove = Vec::new();
for (event_id, user_id) in current_seen_events.iter() {
    if not_knocked_user_ids.contains(user_id) {
        to_remove.push(event_id.to_owned());
    }
}
I think the approach you've implemented works, but it does a lot of work when the room is large and when there's few members who knocked. Instead, could we iterate over all the user ids from the current_seen_events knock events, get their current membership from the store, and remove them if their own membership has changed?
Now that will mean that instead of loading N memberships (when N is the number of members in the room), we'll only load as many as the number of members who knocked.
Force-pushed from 06c4332 to 8f23be3 (compare)
I added fixes for your comments and, as you suggested, I reorganised the commits (I hope they're easier to review now 🤞). I also added a couple more isolated tests for
Force-pushed from 8f23be3 to 9cf7e91 (compare)
Thanks for splitting the commits, it helps a lot with the reviewing 🙏
I added a suggestion for not having to reload all member events from the database, as I think it can work.
room.room_member_updates_sender
    .send(RoomMembersUpdate::FullReload)
    .expect("broadcasting a room members update failed");
I don't think we should send something manually; instead, we should test that the described behavior (getting pinged upon /members or an incremental room member update) is correct. So we'd need two tests:
- when I'm calling /members, then I get a `FullReload`
- when I'm receiving new member info from sync, then I get `Partial` with the correct user id set (a rough sketch of the assertion for this case follows below)

Does it make sense?
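A self-contained sketch of what the assertion for the `Partial` case could look like (the enum is reproduced here only so the snippet stands alone; in the real test the value would come from `receiver.recv().await` after processing a sync response):

```rust
use std::collections::BTreeSet;

use ruma::OwnedUserId;

/// Reproduction of the enum under review, only to keep this sketch self-contained.
#[derive(Debug, Clone)]
enum RoomMembersUpdate {
    FullReload,
    Partial(BTreeSet<OwnedUserId>),
}

/// Assert that an update is `Partial` and contains the expected user id.
fn assert_partial_contains(update: RoomMembersUpdate, expected: &OwnedUserId) {
    match update {
        RoomMembersUpdate::Partial(user_ids) => assert!(user_ids.contains(expected)),
        RoomMembersUpdate::FullReload => panic!("expected a partial update"),
    }
}
```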
// We're not calling `get_seen_join_request_ids` here because we need to keep
// the Mutex's guard until we've updated the data
Think this comment can go now?
let raw_member_events: Vec<RawMemberEvent> =
    self.store.get_state_events_for_keys_static(self.room_id(), &keys).await?;
let member_events = raw_member_events
    .into_iter()
    .map(|raw| raw.deserialize())
    .collect::<Result<Vec<MemberEvent>, _>>()?;
It's a bit unfortunate that we have to load all the member events from the database systematically, only to filter them out later. The base crate has `BaseRoom::get_member(user_id)` and `StateStore::get_member_event(user_id)`, so we can do better. In pseudo-code:
for each (event_id, user_id) in current_seen_events {
    let member = store.get_member(room_id, user_id);
    if member.state is not knocked {
        remove it from current_seen_events
    }
}
And that way, we don't have to preload all the member events from the database. Does it make sense?
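Rendered as a hedged Rust sketch (the `get_member` lookup and `MembershipState::Knock` check follow the APIs named above, but return types and error handling are simplified, and the follow-up discussion below ends up keeping the batched query instead):

```rust
// Keep only the seen knock request ids whose sender is still in the knock state.
let mut still_knocking = BTreeMap::new();
for (event_id, user_id) in current_seen_events.iter() {
    // One lookup per previously seen knock, instead of loading every member.
    if let Some(member) = self.get_member(user_id).await? {
        if member.membership() == &MembershipState::Knock {
            still_knocking.insert(event_id.clone(), user_id.clone());
        }
    }
}
current_seen_events = still_knocking;
```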
But wouldn't that mean we now need to do N queries instead of a single one? I had that code before (with `store.get_member_event`) and I thought it wasn't as performant as it could be:
- With `store.get_member_event` you make several queries.
- With `room.get_member` you make several queries and also load the presence and other data that's not necessary at this point.

The `keys` passed are the user ids of the seen knock requests, so I don't think we're doing any extra work vs loading the members one by one.
> The keys passed are the user ids of the seen knock requests, so I don't think we're doing any extra work vs loading the members one by one.

Alright, I missed that, then we should be good to go 👍
Force-pushed from a6b3976 to ec1c806 (compare)
ec1c806 should take care of those issues. Sorry I force pushed, I realised I had messed up a doc comment just after uploading the new commit.
Thanks, looks great!
This sender will notify receivers when new room members are received: this can happen either when reloading the full room member list from the HS or when new member events arrive during a sync. The sender will emit a `RoomMembersUpdate`, which can be either a full reload or a partial one, including the user ids of the members that changed.
…rent seen knock request ids while keeping them thread-safe with a lock around them
This will check the current seen knock request ids against the room members related to them and will remove those seen ids for members which are no longer in knock state or come from an outdated knock member event.
This cleanup task will run while the knock request subscription runs and will use the `Room::room_member_updates_sender` notifications to call `Room::remove_outdated_seen_knock_requests_ids` and remove outdated seen knock request ids automatically.
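As a rough sketch of that pattern, not the exact code in this PR (the field and method names are the ones mentioned above, but the real signature of `remove_outdated_seen_knock_requests_ids` and its error handling may differ):

```rust
// Inside `subscribe_to_knock_requests`: react to member updates by pruning
// seen knock request ids that no longer point at a knocking member.
let mut member_updates = room.room_member_updates_sender.subscribe();
let room_clone = room.clone();

let cleanup_task = tokio::spawn(async move {
    while let Ok(_update) = member_updates.recv().await {
        // The concrete method may also take the updated user ids as an argument.
        if let Err(err) = room_clone.remove_outdated_seen_knock_requests_ids().await {
            warn!("Failed to remove outdated seen knock request ids: {err}");
        }
    }
});
```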
Force-pushed from ec1c806 to f53a16d (compare)
In a previous PR (#4338) we introduced the concept of a 'seen' knock request id, that is, a member event with knock membership that the current user has seen and decided to explicitly ignore, so that clients can filter it out in the future and it won't clutter the UI. However, if left alone this list will only grow indefinitely, so we need a way to observe changes to the knocking room members in the room and remove outdated seen request ids.
Take this PR as a follow-up of the one mentioned above.
A new `matrix_sdk_base::room_member_updates_sender` field was added too, which will broadcast when a room members update is received, both when loading the room members from the HS and when new member state events are received in a sync.

In `fn Room::subscribe_to_knock_requests`, a new cleanup task is also spawned to listen to these broadcasts and call `fn Room::remove_outdated_seen_knock_requests_ids`, which will compare the existing seen knock request ids with the room members having a knock membership state in the room and discard the seen knock request ids that are no longer valid.

Signed-off-by: