NIP-46 ack tracking, xpub announce, and security hardening #282
kwsantiago merged 13 commits into main
Conversation
Walkthrough

Memory-safe secret handling (Zeroizing) was added across the agent and server crates, a relay-switching flow (switch_relays) with request-ID gating was implemented, descriptor coordination gained initiator/participant limits and caps, constant-time checks and API/struct adjustments were applied, and several operational limits were introduced.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Pending as PendingSession
    participant Client as AgentClient
    participant Relays as Relays/RelayManager
    participant Network as NetworkSocket
    Pending->>Pending: send_connect_once() (AtomicBool guard)
    Pending->>Network: emit connect event
    Network->>Client: connection established
    Client->>Client: send connect payload
    Client->>Client: switch_relays() invoked
    Client->>Relays: request current relays (parse null/string/array)
    Relays-->>Client: return Option<Vec<String>>
    Client->>Client: update internal relay_url & state
    Client-->>Pending: return AgentClient (with up-to-date relays)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
keep-agent-ts/src/lib.rs (1)
70-84: ⚠️ Potential issue | 🟠 Major — Intermediate secret copies are not zeroized.

The PR objective states "zeroize intermediate secret copies after use," but `decoded` (a Vec containing secret bytes) and `arr` (the intermediate array) are not wrapped in `Zeroizing`. Since `[u8; 32]` is `Copy`, the original `arr` remains on the stack when passed to `Zeroizing::new()`.

🔒 Proposed fix to zeroize intermediate copies
```diff
 let secret_bytes: Option<Zeroizing<[u8; 32]>> = if let Some(ref sk) = secret_key {
-    let decoded = hex::decode(sk)
-        .map_err(|e| Error::from_reason(format!("Invalid secret key hex: {}", e)))?;
+    let decoded = Zeroizing::new(
+        hex::decode(sk)
+            .map_err(|e| Error::from_reason(format!("Invalid secret key hex: {}", e)))?,
+    );
     if decoded.len() != 32 {
         return Err(Error::from_reason(format!(
             "Secret key must be 32 bytes, got {}",
             decoded.len()
         )));
     }
-    let mut arr = [0u8; 32];
+    let mut arr = Zeroizing::new([0u8; 32]);
     arr.copy_from_slice(&decoded);
-    Some(Zeroizing::new(arr))
+    Some(arr)
 } else {
     None
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-agent-ts/src/lib.rs` around lines 70-84: the decoded secret Vec and the intermediate array are not wrapped in Zeroizing, leaving copies on the stack. Change the decode and array creation so both are zeroized: store the decode result as `let decoded = Zeroizing::new(hex::decode(sk)...)` (keeping the same error mapping), check `decoded.len()`, then create the array as `let mut arr = Zeroizing::new([0u8; 32]);`, call `arr.copy_from_slice(&decoded);`, and return `Some(arr)`, ensuring both the decoded bytes and the `[u8; 32]` intermediate are zeroized. References: secret_bytes, secret_key, Zeroizing::new, hex::decode.

keep-agent-py/src/lib.rs (1)
173-184: ⚠️ Potential issue | 🟠 Major — Wrap the decoded secret vector in `Zeroizing` to ensure secure memory cleanup.

Line 174's `hex::decode(&sk)` returns a `Vec<u8>` containing the secret key bytes. This vector is dropped without being zeroized, leaving plaintext key material in memory. Wrap the result in `Zeroizing::new()` to ensure the intermediate buffer is securely wiped when it goes out of scope.

Suggested fix
```diff
 let secret_bytes: Option<Zeroizing<[u8; 32]>> = if let Some(sk) = secret_key {
-    let decoded = hex::decode(&sk)
-        .map_err(|e| PyValueError::new_err(format!("Invalid secret key hex: {}", e)))?;
+    let decoded = Zeroizing::new(
+        hex::decode(&sk)
+            .map_err(|e| PyValueError::new_err(format!("Invalid secret key hex: {}", e)))?,
+    );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-agent-py/src/lib.rs` around lines 173-184: the decoded secret Vec from hex::decode must be wrapped in Zeroizing to ensure the intermediate buffer is zeroed; update the block handling secret_key so that the result of hex::decode(&sk) is stored as `let decoded = Zeroizing::new(hex::decode(&sk).map_err(|e| PyValueError::new_err(format!("Invalid secret key hex: {}", e)))?);`, then validate decoded.len(), copy from decoded.as_slice() into the fixed `[u8; 32]`, and construct `Some(Zeroizing::new(arr))`; keep the same error mapping and variable names (secret_key, decoded, secret_bytes, Zeroizing) so the flow and checks are unchanged but the temporary buffer is zeroized.

keep-frost-net/src/node/descriptor.rs (2)
789-838: ⚠️ Potential issue | 🟡 Minor — Prevent duplicate `DescriptorComplete` emissions on replayed ACKs.

`DescriptorAcked` is now correctly gated by `is_new`, but at Line 826 completion still emits on any `is_complete == true`. Duplicate ACKs after completion can retrigger `DescriptorComplete`.

♻️ Proposed fix
```diff
-        let (is_new, is_complete, ack_count, expected_acks) = {
+        let (is_new, should_emit_complete, ack_count, expected_acks) = {
             let mut sessions = self.descriptor_sessions.write();
             let session = sessions
                 .get_session_mut(&payload.session_id)
                 .ok_or_else(|| FrostNetError::Session("unknown descriptor session".into()))?;
             let is_new = session.add_ack(
                 share_index,
                 payload.descriptor_hash,
                 &payload.key_proof_psbt,
             )?;
             (
                 is_new,
-                session.is_complete(),
+                is_new && session.is_complete(),
                 session.ack_count(),
                 session.expected_ack_count(),
             )
         };
@@
-        if is_complete {
+        if should_emit_complete {
             let sessions = self.descriptor_sessions.read();
             if let Some(session) = sessions.get_session(&payload.session_id) {
                 if let Some(desc) = session.descriptor() {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-frost-net/src/node/descriptor.rs` around lines 789 - 838, The code currently emits KfpNodeEvent::DescriptorComplete whenever is_complete is true, which allows replayed ACKs to retrigger completion; change the emission to only fire when the ACK caused completion by checking both is_new and is_complete (i.e., require is_new && is_complete) before reading descriptor_sessions and sending KfpNodeEvent::DescriptorComplete; keep the existing use of add_ack, descriptor_sessions, event_tx, and KfpNodeEvent::DescriptorAcked/DescriptorComplete but gate the DescriptorComplete branch on is_new && is_complete.
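The gating the prompt describes can be sketched with std-only types; `AckTracker`, `should_emit_complete`, and the small cap below are hypothetical stand-ins for the real session and event plumbing, which also verify hashes and key proofs:

```rust
use std::collections::HashSet;

/// Minimal stand-in for the descriptor session's ACK bookkeeping.
struct AckTracker {
    acks: HashSet<u16>,
    expected: usize,
}

impl AckTracker {
    fn new(expected: usize) -> Self {
        Self { acks: HashSet::new(), expected }
    }

    /// Returns true only when this ACK was not seen before.
    fn add_ack(&mut self, share_index: u16) -> bool {
        self.acks.insert(share_index)
    }

    fn is_complete(&self) -> bool {
        self.acks.len() >= self.expected
    }
}

/// Emit "complete" only when this particular ACK caused the transition.
fn should_emit_complete(tracker: &mut AckTracker, share_index: u16) -> bool {
    let is_new = tracker.add_ack(share_index);
    is_new && tracker.is_complete()
}

fn main() {
    let mut t = AckTracker::new(2);
    assert!(!should_emit_complete(&mut t, 1)); // first ACK, session not complete yet
    assert!(should_emit_complete(&mut t, 2));  // the completing ACK fires exactly once
    assert!(!should_emit_complete(&mut t, 2)); // a replayed ACK stays silent
}
```

Because `is_new` is false for every replayed ACK, the completion event can never fire twice no matter how often the final ACK is redelivered.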
633-644: ⚠️ Potential issue | 🔴 Critical — `set_finalized` is not idempotent; a retry after a send_event failure will stall.

At lines 203-226 in descriptor_session.rs, `set_finalized` explicitly checks `if self.state != DescriptorSessionState::Proposed` and returns an error otherwise. Once the state transitions to `Finalized` at line 223, any subsequent call to `set_finalized` will fail with "Can only finalize from Proposed state".

In `handle_descriptor_finalize`, the sequence at lines 633-643 persists finalization (state → `Finalized`) before attempting `send_event`. If transport fails at line 642, a retry will re-enter and call `set_finalized` again at line 633, which will now reject the call. This blocks the finalization event from being resent, leaving peers unaware of finalization and stalling the ACK phase.

To fix: either make `set_finalized` idempotent (accept repeated calls with the same descriptor), restructure to send the event before persisting state, or add a separate resend path that doesn't require a state transition.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-frost-net/src/node/descriptor.rs` around lines 633 - 644, The current flow in handle_descriptor_finalize calls session.set_finalized(...) (which requires state == DescriptorSessionState::Proposed) before self.client.send_event(...), so a transport retry will re-run and fail because set_finalized is not idempotent; fix by making set_finalized idempotent: change DescriptorSession::set_finalized to accept repeated calls when self.state == Finalized if the provided FinalizedDescriptor (external/internal/policy_hash) matches the already-stored values and return Ok(()) instead of error, preserving existing error behavior for conflicting descriptors; alternatively (if you prefer the other approach) move the self.client.send_event(&event).await call to before session.set_finalized so the persisted state transition happens only after a successful send, ensuring retries can re-send the event without a state transition conflict.
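The idempotent-finalize option can be sketched as follows; `State`, `Session`, and the error string are simplified stand-ins for `DescriptorSessionState` and `DescriptorSession`, not the real types:

```rust
#[derive(Clone, Copy, PartialEq)]
enum State { Proposed, Finalized }

#[derive(Clone, PartialEq)]
struct FinalizedDescriptor {
    external: String,
    internal: String,
    policy_hash: [u8; 32],
}

struct Session {
    state: State,
    finalized: Option<FinalizedDescriptor>,
}

impl Session {
    fn set_finalized(&mut self, desc: FinalizedDescriptor) -> Result<(), String> {
        match self.state {
            State::Proposed => {
                self.finalized = Some(desc);
                self.state = State::Finalized;
                Ok(())
            }
            // Idempotent: a retry with the same descriptor succeeds...
            State::Finalized if self.finalized.as_ref() == Some(&desc) => Ok(()),
            // ...but a conflicting descriptor is still rejected.
            State::Finalized => Err("conflicting finalized descriptor".into()),
        }
    }
}

fn main() {
    let desc = FinalizedDescriptor {
        external: "tr(k)".into(),
        internal: "tr(k)/1".into(),
        policy_hash: [0x11; 32],
    };
    let mut s = Session { state: State::Proposed, finalized: None };
    assert!(s.set_finalized(desc.clone()).is_ok()); // first finalize
    assert!(s.set_finalized(desc.clone()).is_ok()); // retry after a failed send no longer stalls
    let conflicting = FinalizedDescriptor { external: "tr(other)".into(), ..desc };
    assert!(s.set_finalized(conflicting).is_err()); // conflicts still error
}
```

With this shape, a retry loop around `set_finalized` plus `send_event` is safe: the state transition becomes a no-op on replay while conflicting descriptors keep the original error behavior.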
🧹 Nitpick comments (4)
keep-frost-net/src/descriptor_session.rs (1)
228-234: Add explicit tests for the new `Result<bool>` ACK semantics.

Current ACK tests unwrap results but don't verify true/false behavior. Please add coverage for a first ACK (true), a duplicate ACK (false), and an unchanged completion state on a duplicate ACK.

💡 Suggested test addition
```diff
+    #[test]
+    fn test_add_ack_returns_false_for_duplicate_ack() {
+        let mut session = test_session();
+        let session_id = *session.session_id();
+
+        let (xpub1, fp1, secret1) = real_xpub(41);
+        let (xpub2, fp2, _) = real_xpub(42);
+        let (xpub3, fp3, _) = real_xpub(43);
+
+        session.add_contribution(1, xpub1.clone(), fp1).unwrap();
+        session.add_contribution(2, xpub2, fp2).unwrap();
+        session.add_contribution(3, xpub3, fp3).unwrap();
+
+        let external = "tr(frost_key)";
+        let internal = "tr(frost_key)/1";
+        let policy_hash = [0x11; 32];
+        session.set_finalized(FinalizedDescriptor {
+            external: external.into(),
+            internal: internal.into(),
+            policy_hash,
+        }).unwrap();
+
+        let hash = descriptor_hash(external, internal, &policy_hash);
+        let proof1 = sign_proof(&session_id, 1, &xpub1, &secret1);
+
+        assert_eq!(session.add_ack(1, hash, &proof1).unwrap(), true);
+        assert_eq!(session.add_ack(1, hash, &proof1).unwrap(), false);
+        assert_eq!(session.ack_count(), 1);
+    }
```

Also applies to: 246-247, 283
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-frost-net/src/descriptor_session.rs` around lines 228-234: add explicit assertions around the new Result<bool> semantics of add_ack: call DescriptorSession::add_ack (or the add_ack method on the session struct) and assert that the first successful ACK returns Ok(true), a repeated ACK with the same share_index/descriptor_hash returns Ok(false), and that the session's completion state (e.g., is_complete or completed flag) does not change after the duplicate ACK. Update the existing tests that unwrap results at the earlier locations (around the add_ack usages referenced) to assert the boolean return value and verify the unchanged completion state on duplicate calls.

keep-desktop/src/frost.rs (1)
947-958: Remove the duplicated `active_coordinations.clear()` call.

Line 947 and Line 958 perform the same clear on the same path; keeping one improves readability without a behavior change.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-desktop/src/frost.rs` around lines 947-958: the code calls active_coordinations.clear() twice in the same control path; remove the duplicate call (keep only one clear call) to improve readability. Locate the repeated active_coordinations.clear() invocations in the same block (surrounding frost_reconnect_attempts, frost_reconnect_at, frost_node.lock(), and pending_sign_requests.lock()) and delete the second occurrence so only a single active_coordinations.clear() remains.

keep-agent/src/client.rs (2)
193: Consider logging switch_relays failures.

Silently discarding the `switch_relays` result could mask connectivity issues with the new relays. While falling back to the original relay is reasonable, logging failures would aid debugging.

```diff
-    let _ = client.switch_relays().await;
+    if let Err(e) = client.switch_relays().await {
+        tracing::debug!("switch_relays after approval failed: {e}");
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-agent/src/client.rs` at line 193, The call to client.switch_relays().await currently discards any error; change it to capture the Result from the async call and log failures (e.g., via your crate's logger or the existing logging facility) including the error details and context, while preserving the current fallback behavior to the original relay; specifically update the invocation of switch_relays() in keep-agent/src/client.rs so the returned Result is matched/handled and failures are logged (with error/warn level and the error message) instead of being ignored.
439-448: Edge case: the client is left disconnected if all add_relay calls fail.

After `remove_all_relays()`, if every `add_relay` call fails (due to transient network issues, not URL validity), the client will be left with no relays. Consider verifying that at least one relay was successfully added before returning success.

```diff
     self.client.disconnect().await;
     self.client.remove_all_relays().await;
+    let mut added_any = false;
     for relay in &valid_relays {
-        let _ = self.client.add_relay(relay).await;
+        if self.client.add_relay(relay).await.is_ok() {
+            added_any = true;
+        }
     }
+    if !added_any {
+        // Restore original relay as fallback
+        let _ = self.client.add_relay(&self.relay_url).await;
+        self.client.connect().await;
+        return Err(AgentError::Connection("Failed to add any new relays".into()));
+    }
     self.client.connect().await;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-agent/src/client.rs` around lines 439 - 448, After calling self.client.disconnect() and self.client.remove_all_relays(), track whether any self.client.add_relay(relay).await calls actually succeed (e.g., count successes or collect successful relays) instead of blindly assuming valid_relays were added; only call self.client.connect(), set self.relay_url = valid_relays[0].clone(), and return Ok(Some(...)) when at least one add_relay succeeded. If zero relays were added, revert to a safe state (for example: re-add previous relays, reconnect to them, or return Ok(None)/an Err) so the client is not left disconnected; update the logic around add_relay/connect/relay_url to use the successful-relays collection.
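The `added_any` bookkeeping suggested above can be shown with a synchronous, std-only stand-in; `RelayPool` and its failure rule are invented for illustration, since the real client is the async nostr-sdk API:

```rust
/// Hypothetical stand-in for the async relay client.
struct RelayPool {
    relays: Vec<String>,
}

impl RelayPool {
    /// Pretend adds fail for non-wss URLs (a stand-in for network errors).
    fn add_relay(&mut self, url: &str) -> Result<(), ()> {
        if url.starts_with("wss://") {
            self.relays.push(url.to_string());
            Ok(())
        } else {
            Err(())
        }
    }
}

fn switch_relays(pool: &mut RelayPool, current: &str, new_relays: &[&str]) -> Result<(), String> {
    pool.relays.clear(); // mirrors remove_all_relays()
    let mut added_any = false;
    for url in new_relays {
        if pool.add_relay(url).is_ok() {
            added_any = true;
        }
    }
    if !added_any {
        // Restore the original relay so the client is never left bare.
        let _ = pool.add_relay(current);
        return Err("failed to add any new relays".into());
    }
    Ok(())
}

fn main() {
    let mut pool = RelayPool { relays: vec!["wss://old.example".into()] };
    // Every new relay fails: error out, but fall back to the original relay.
    assert!(switch_relays(&mut pool, "wss://old.example", &["bad-url"]).is_err());
    assert_eq!(pool.relays, vec!["wss://old.example".to_string()]);
    // At least one new relay succeeds: the switch goes through.
    assert!(switch_relays(&mut pool, "wss://old.example", &["wss://new.example"]).is_ok());
}
```

The key invariant is that the pool is non-empty on every exit path, so a transient failure during the switch can never strand the client without any relay.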
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@keep-agent-py/src/lib.rs`:
- Around line 311-312: The hex-encoded secret is created as a plain String via
hex::encode(secret.as_ref()) which leaves plaintext in memory; wrap that
temporary in zeroize::Zeroizing to ensure it is cleared after use. Replace the
direct call to Keys::parse(&hex::encode(...)) with a Zeroizing<String> (e.g.,
let hex = Zeroizing::new(hex::encode(secret.as_ref()))), pass &hex (or
hex.as_str()) into Keys::parse and keep the Zeroizing binding scoped so it is
dropped/zeroed immediately after map_err(to_py_value_err) completes; also add
use zeroize::Zeroizing at the top of the module.
In `@keep-agent-ts/src/lib.rs`:
- Around line 315-318: The code currently builds Keys by hex-encoding the
zeroized secret bytes (hex::encode(secret.as_ref())) which allocates a
non-zeroized String; instead construct a nostr_sdk::prelude::SecretKey directly
from the byte slice using SecretKey::from_slice(secret.as_ref()) and then create
Keys via Keys::new(secret_key), handling map_err(...) the same way;
alternatively, if you must keep the hex path, ensure you wrap the hex string in
a zeroize::Zeroize type and call .zeroize() after creating Keys to avoid leaving
the secret in memory (refer to Keys, SecretKey::from_slice, Keys::new,
hex::encode, and zeroize::Zeroize).
In `@keep-desktop/src/frost.rs`:
- Around line 718-724: When get_frost_node() returns None you mark the setup as
failed but forget to remove the session from active_coordinations, leaking the
coordination slot; before returning (where you currently call
iced::Task::none()) remove the session from active_coordinations (use the same
session_id key) so the coordination is cleaned up; update the block around
get_frost_node() failure to call self.active_coordinations.remove(&session_id)
(or the appropriate remove method) immediately after update_wallet_setup(...)
and before returning.
In `@keep-frost-net/src/node/mod.rs`:
- Around line 1087-1094: The serialized signing share is created as a plain Vec
at signing_share_bytes; wrap it in Zeroizing to avoid a plaintext secret copy
before hashing: replace the direct serialize call with a Zeroizing-wrapped
buffer (e.g. let signing_share_bytes =
Zeroizing::new(key_package.signing_share().serialize());) and then pass
signing_share_bytes.as_slice() into the Sha256 hasher in the block with
hasher.update(...); ensure you use the Zeroizing variable name used here so the
secret is zeroed when it goes out of scope.
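The comments above repeatedly ask for `Zeroizing` wrappers around intermediate secret buffers. To illustrate why the wrapper matters, here is a minimal std-only stand-in for `zeroize::Zeroizing<Vec<u8>>`; the real crate additionally uses compiler fences so the wipe cannot be optimized away, which this sketch does not attempt, and `parse_secret` is an invented example function:

```rust
use std::ops::Deref;

/// Std-only stand-in for zeroize::Zeroizing<Vec<u8>>: the buffer is
/// overwritten with zeros when it goes out of scope.
struct ZeroizingVec(Vec<u8>);

impl Deref for ZeroizingVec {
    type Target = Vec<u8>;
    fn deref(&self) -> &Vec<u8> { &self.0 }
}

impl Drop for ZeroizingVec {
    fn drop(&mut self) {
        for b in self.0.iter_mut() {
            *b = 0;
        }
    }
}

fn parse_secret(raw: &[u8]) -> Result<[u8; 32], String> {
    // Wrap the intermediate buffer so it is wiped on every exit path,
    // including the early-return error below.
    let decoded = ZeroizingVec(raw.to_vec());
    if decoded.len() != 32 {
        return Err(format!("secret must be 32 bytes, got {}", decoded.len()));
    }
    let mut arr = [0u8; 32];
    arr.copy_from_slice(&decoded);
    Ok(arr) // callers should wrap this copy too, as the review notes
}

fn main() {
    assert!(parse_secret(&[0xAB; 32]).is_ok());
    assert!(parse_secret(&[0xAB; 16]).is_err()); // wiped even on the error path
}
```

The Drop-based wipe is exactly what makes the pattern composable: every intermediate (decode buffer, hex string, fixed array) that holds key material gets its own guard, so no path through the function leaks a plaintext copy.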
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
keep-agent-py/Cargo.toml, keep-agent-py/src/lib.rs, keep-agent-ts/Cargo.toml, keep-agent-ts/src/lib.rs, keep-agent/src/client.rs, keep-desktop/src/app.rs, keep-desktop/src/frost.rs, keep-desktop/src/message.rs, keep-desktop/src/screen/wallet.rs, keep-frost-net/src/descriptor_session.rs, keep-frost-net/src/node/descriptor.rs, keep-frost-net/src/node/mod.rs, keep-nip46/src/audit.rs, keep-nip46/src/handler.rs
🧹 Nitpick comments (1)
keep-agent/src/client.rs (1)
184-191: Silently ignoring `switch_relays()` errors is acceptable, but consider logging.

The `let _ = client.switch_relays().await;` pattern intentionally ignores relay-switching failures, which is reasonable since the session is already approved. However, logging at debug/trace level could help with troubleshooting.

💡 Optional: Add trace logging for relay switch failures

```diff
-    let _ = client.switch_relays().await;
+    if let Err(e) = client.switch_relays().await {
+        tracing::trace!("Relay switch after approval failed: {e}");
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-agent/src/client.rs` around lines 184 - 191, Replace the silent ignore of switch_relays() with a logged result: where AgentClient is constructed and switch_relays().await is currently called as `let _ = client.switch_relays().await;`, capture the Result and if Err emit a trace/debug log including the error (using the project logger/tracing macros) and a short contextual message that includes that this occurred during post-approval session creation; keep the flow so the function still returns Ok(client) on failure. This touches the AgentClient construction and the switch_relays() call in client.rs.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
keep-agent-py/src/lib.rs, keep-agent-ts/src/lib.rs, keep-agent/src/client.rs, keep-desktop/src/frost.rs, keep-frost-net/src/node/mod.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- keep-desktop/src/frost.rs
🧹 Nitpick comments (2)
keep-desktop/src/frost.rs (1)
946-957: Remove the duplicate `active_coordinations.clear()` call.

Line 946 already clears `active_coordinations`. The second call at Line 957 is redundant.

♻️ Proposed fix

```diff
     self.active_coordinations.clear();
     self.frost_reconnect_attempts = 0;
     self.frost_reconnect_at = None;
     if let Ok(mut guard) = self.frost_node.lock() {
         *guard = None;
     }
     if let Ok(mut guard) = self.pending_sign_requests.lock() {
         for entry in guard.drain(..) {
             let _ = entry.response_tx.try_send(false);
         }
     }
-    self.active_coordinations.clear();
     if let Some(s) = self.relay_screen_mut() {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-desktop/src/frost.rs` around lines 946-957: remove the redundant call to self.active_coordinations.clear(); the block already calls it at the start, so delete the second invocation to avoid duplicate work. Locate and remove the duplicate clear within the same function containing frost_reconnect_attempts/frost_reconnect_at and the frost_node/pending_sign_requests guards.

keep-desktop/src/app.rs (1)
1309-1345: Consider storing `expected_participants` for UI progress tracking.

The `expected_participants` value is extracted but unused (Line 1311). If you intend to show "X of Y participants" progress in the UI, you may want to store this in `ActiveCoordination` or `SetupState`. If it is intentionally reserved for future use, a brief comment would clarify intent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-desktop/src/app.rs` around lines 1309 - 1345, The handler for Message::WalletSessionStarted is currently ignoring the extracted _expected_participants; add storage for that value so the UI can show progress: extend ActiveCoordination (or the WalletScreen Setup state type used in WalletScreen { setup: Some(s), .. }) with an expected_participants: usize (or Option<usize>) field, and when creating/inserting the ActiveCoordination in the success branch set that field from expected_participants; also set the corresponding field on the Setup state (e.g., s.expected_participants = Some(expected_participants)) when updating s.session_id so the UI can read "X of Y" progress. Ensure any new field is optional if not always available and update constructors/uses of ActiveCoordination accordingly.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
keep-desktop/src/app.rs, keep-desktop/src/frost.rs, keep-desktop/src/message.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- keep-desktop/src/message.rs
- Derive node identity key from signing share secret, not just public data
- Add verify_peer_share_index to handle_descriptor_propose
- Cap seen_xpub_announces at 10k entries with eviction
- Validate network parameter in request_descriptor
- Zeroize intermediate hash in derive_keys_from_share
- Return bool from add_ack to prevent duplicate event emission
- Only increment acks_received for initiator nodes
- Validate relay URLs before adding in switch_relays
- Add require_approval to nip44/nip04 encrypt handlers
- Use constant-time comparison for descriptor hash
- Wrap secret keys in Zeroizing in Python/TypeScript bindings
- Zeroize secret key copies after use in sign methods
- Correlate response ID in send_request
- Remove double-sleep in wait_for_approval
- Send connect request only once across poll iterations
- Finalize descriptor locally before sending ACK
- Use hex instead of bech32 in approval_url
- Compute expected_acks from all contributors, not just online peers
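The constant-time descriptor-hash comparison mentioned in the commits above can be illustrated with a std-only sketch. Production code would normally use the subtle crate's `ConstantTimeEq`; a hand-rolled version like this conveys the idea but is not guaranteed against compiler optimization:

```rust
/// Compare two 32-byte hashes without an early exit, so the running time
/// does not depend on where the first differing byte is.
fn ct_eq(a: &[u8; 32], b: &[u8; 32]) -> bool {
    let mut diff: u8 = 0;
    for i in 0..32 {
        // Accumulate differences with XOR/OR instead of branching.
        diff |= a[i] ^ b[i];
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(&[7u8; 32], &[7u8; 32]));
    assert!(!ct_eq(&[7u8; 32], &[8u8; 32]));
}
```

A naive `a == b` short-circuits at the first mismatch, which can leak how many leading bytes matched through timing; the accumulator form always touches all 32 bytes.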
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
keep-frost-net/src/node/descriptor.rs (1)
824-873: ⚠️ Potential issue | 🟡 Minor — Prevent duplicate `DescriptorComplete` emissions on replayed ACKs.

Line 861 currently emits completion whenever `is_complete` is true, even when Line 830 indicates `is_new == false`. That allows duplicate completion events from repeated ACKs.

Suggested fix
```diff
-            if is_complete {
+            if is_new && is_complete {
                 let sessions = self.descriptor_sessions.read();
                 if let Some(session) = sessions.get_session(&payload.session_id) {
                     if let Some(desc) = session.descriptor() {
                         let _ = self.event_tx.send(KfpNodeEvent::DescriptorComplete {
                             session_id: payload.session_id,
                             external_descriptor: desc.external.clone(),
                             internal_descriptor: desc.internal.clone(),
                             network: session.network().to_string(),
                         });
                     }
                 }
             }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-frost-net/src/node/descriptor.rs` around lines 824 - 873, The code sends DescriptorComplete on any ACK with is_complete true, causing duplicate events for replayed ACKs; change the emission so completion is only sent when the session actually transitioned to complete on this ACK—i.e., require both is_new and is_complete (or move the DescriptorComplete send into the is_new branch) after calling session.add_ack in descriptor_sessions; use the existing symbols add_ack, is_new, is_complete, event_tx.send(KfpNodeEvent::DescriptorComplete { ... }), and session.descriptor() to locate and update the logic.
🧹 Nitpick comments (1)
keep-agent/src/manager.rs (1)
39-43: Consider invoking `cleanup_expired` before rejecting new sessions.

The guard correctly prevents exceeding `MAX_SESSIONS`, but if the limit is reached due to accumulated expired sessions, legitimate requests will be rejected. Consider calling `cleanup_expired` (or a similar inline cleanup) before the capacity check to reclaim slots from expired sessions.

💡 Proposed improvement

```diff
         let mut sessions = self
             .sessions
             .write()
             .map_err(|_| AgentError::Other("Failed to acquire session lock".into()))?;
+        // Reclaim expired session slots before checking capacity
+        sessions.retain(|_, s| !s.is_expired());
+
         if sessions.len() >= MAX_SESSIONS {
             return Err(AgentError::Other(format!(
                 "Maximum session count ({MAX_SESSIONS}) reached"
             )));
         }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-agent/src/manager.rs` around lines 39 - 43, Before checking capacity, invoke cleanup_expired to reclaim slots from expired sessions so expired entries don't cause a false MAX_SESSIONS rejection; specifically, call the cleanup_expired (or the inline expiration-cleanup routine) prior to the sessions.len() >= MAX_SESSIONS check in the function that performs session creation, ensuring the MAX_SESSIONS guard operates on a pruned sessions collection (references: cleanup_expired, MAX_SESSIONS, sessions.len()).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@keep-frost-net/src/node/descriptor.rs`:
- Around line 824-873: The code sends DescriptorComplete on any ACK with
is_complete true, causing duplicate events for replayed ACKs; change the
emission so completion is only sent when the session actually transitioned to
complete on this ACK—i.e., require both is_new and is_complete (or move the
DescriptorComplete send into the is_new branch) after calling session.add_ack in
descriptor_sessions; use the existing symbols add_ack, is_new, is_complete,
event_tx.send(KfpNodeEvent::DescriptorComplete { ... }), and
session.descriptor() to locate and update the logic.
---
Nitpick comments:
In `@keep-agent/src/manager.rs`:
- Around line 39-43: Before checking capacity, invoke cleanup_expired to reclaim
slots from expired sessions so expired entries don't cause a false MAX_SESSIONS
rejection; specifically, call the cleanup_expired (or the inline
expiration-cleanup routine) prior to the sessions.len() >= MAX_SESSIONS check in
the function that performs session creation, ensuring the MAX_SESSIONS guard
operates on a pruned sessions collection (references: cleanup_expired,
MAX_SESSIONS, sessions.len()).
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- keep-agent-py/Cargo.toml
- keep-agent-py/src/lib.rs
- keep-agent-ts/Cargo.toml
- keep-agent-ts/src/lib.rs
- keep-agent/src/client.rs
- keep-agent/src/manager.rs
- keep-agent/src/mcp/server.rs
- keep-desktop/src/app.rs
- keep-desktop/src/frost.rs
- keep-desktop/src/message.rs
- keep-desktop/src/screen/wallet.rs
- keep-frost-net/src/descriptor_session.rs
- keep-frost-net/src/node/descriptor.rs
- keep-nip46/Cargo.toml
- keep-nip46/src/audit.rs
- keep-nip46/src/handler.rs
- keep-nip46/src/server.rs
✅ Files skipped from review due to trivial changes (1)
- keep-agent-ts/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (3)
- keep-nip46/src/audit.rs
- keep-agent-py/Cargo.toml
- keep-desktop/src/message.rs
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
keep-nip46/tests/bunker_integration.rs (1)
169-180: ⚠️ Potential issue | 🟡 Minor

Missing assertion on `request_count`.

The polling loop exits without asserting that `request_count > 0`. If the relay never delivers responses, the test silently passes with `request_count = 0`. Consider adding an assertion after the loop:

Proposed fix

```diff
     }
+    assert!(
+        request_count > 0,
+        "expected at least one request to be recorded via relay"
+    );
+
     // Cleanup
     server_handle.abort();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-nip46/tests/bunker_integration.rs` around lines 169 - 180, The polling loop sets request_count from server_handler.list_clients() but never fails the test if it remains zero; after the for-loop in bunker_integration.rs add an assertion (e.g., assert!(request_count > 0, "relay didn't deliver responses: request_count == 0") or assert_ne!(request_count, 0)) to fail the test when no responses were delivered, ensuring the behavior checked by the loop is actually enforced.
🧹 Nitpick comments (2)
keep-nip46/tests/bunker_integration.rs (1)
76-83: Consider event-based synchronization for determinism.

Fixed sleeps (even reduced ones) can cause flaky tests on slow CI runners. Consider polling for specific conditions or using channels/events to signal readiness instead of relying on timing.
That said, the polling loop at lines 169-180 provides good resilience for the final assertion, and MockRelay should be faster than real relays.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-nip46/tests/bunker_integration.rs` around lines 76 - 83, Replace the fixed tokio::time::sleep calls with event-based synchronization: after creating Keys::generate and Client::new and calling client.add_relay(relay_url).await.unwrap() and client.connect().await, poll or await a readiness signal (e.g., Client::is_connected(), a connection_ready() future, or a oneshot/mpsc notification emitted by MockRelay) instead of sleeping; implement/consume a short loop with timeout that checks the client/relay ready condition (or subscribe to MockRelay's ready channel) so tests become deterministic and do not rely on Duration::from_secs sleeps.

keep-frost-net/src/node/descriptor.rs (1)

937-945: Use saturating arithmetic for eviction window math.

At Line 940, `self.replay_window_secs + super::MAX_FUTURE_SKEW_SECS` can overflow on extreme config values. Prefer `saturating_add` to keep pruning deterministic.

Suggested diff

```diff
-        let window = self.replay_window_secs + super::MAX_FUTURE_SKEW_SECS;
+        let window = self
+            .replay_window_secs
+            .saturating_add(super::MAX_FUTURE_SKEW_SECS);
```

You can confirm whether overflow is currently impossible by checking bounds on `replay_window_secs`:

```bash
#!/bin/bash
set -euo pipefail
# Find where replay_window_secs is defined and validated
rg -n -C4 '\breplay_window_secs\b' --type rust
# Inspect skew constant definition/type/value
rg -n -C4 '\bMAX_FUTURE_SKEW_SECS\b' --type rust
```

Expected: an explicit upper bound that proves no overflow, or adoption of `saturating_add`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@keep-frost-net/src/node/descriptor.rs` around lines 937 - 945, The eviction window calculation can overflow when computing self.replay_window_secs + super::MAX_FUTURE_SKEW_SECS; update the addition to use saturating_add (or otherwise bound/validate replay_window_secs) so the window uses deterministic saturating arithmetic; locate the block around MAX_SEEN_XPUB_ANNOUNCES and change the computation of window (used in seen.retain(|&(_, ts, _)| now.saturating_sub(window) <= ts)) to use self.replay_window_secs.saturating_add(super::MAX_FUTURE_SKEW_SECS) so pruning cannot overflow before the retain/clear/dedup_key logic runs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@keep-frost-net/src/node/descriptor.rs`:
- Around line 675-679: The code finalizes the session
(session.set_finalized(...)) before sending the network event
(client.send_event), which makes transient send failures non-retryable; change
the flow so that the session is only marked Finalized after
client.send_event(&event).await succeeds, or if you prefer to keep the
commit-first approach then re-acquire the write lock and revert the session
state if client.send_event returns an error; update the logic around
session.set_finalized and client.send_event to ensure idempotent retries
(reference session.set_finalized, client.send_event, and the error mapping to
FrostNetError).
---
Outside diff comments:
In `@keep-nip46/tests/bunker_integration.rs`:
- Around line 169-180: The polling loop sets request_count from
server_handler.list_clients() but never fails the test if it remains zero; after
the for-loop in bunker_integration.rs add an assertion (e.g.,
assert!(request_count > 0, "relay didn't deliver responses: request_count == 0")
or assert_ne!(request_count, 0)) to fail the test when no responses were
delivered, ensuring the behavior checked by the loop is actually enforced.
---
Nitpick comments:
In `@keep-frost-net/src/node/descriptor.rs`:
- Around line 937-945: The eviction window calculation can overflow when
computing self.replay_window_secs + super::MAX_FUTURE_SKEW_SECS; update the
addition to use saturating_add (or otherwise bound/validate replay_window_secs)
so the window uses deterministic saturating arithmetic; locate the block around
MAX_SEEN_XPUB_ANNOUNCES and change the computation of window (used in
seen.retain(|&(_, ts, _)| now.saturating_sub(window) <= ts)) to use
self.replay_window_secs.saturating_add(super::MAX_FUTURE_SKEW_SECS) so pruning
cannot overflow before the retain/clear/dedup_key logic runs.
In `@keep-nip46/tests/bunker_integration.rs`:
- Around line 76-83: Replace the fixed tokio::time::sleep calls with event-based
synchronization: after creating Keys::generate and Client::new and calling
client.add_relay(relay_url).await.unwrap() and client.connect().await, poll or
await a readiness signal (e.g., Client::is_connected(), a connection_ready()
future, or a oneshot/mpsc notification emitted by MockRelay) instead of
sleeping; implement/consume a short loop with timeout that checks the
client/relay ready condition (or subscribe to MockRelay's ready channel) so
tests become deterministic and do not rely on Duration::from_secs sleeps.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
- keep-agent/src/manager.rs
- keep-frost-net/src/node/descriptor.rs
- keep-nip46/Cargo.toml
- keep-nip46/tests/bunker_integration.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- keep-agent/src/manager.rs
- keep-nip46/Cargo.toml
Summary
Test plan
Summary by CodeRabbit
New Features
Security & Improvements
Bug Fixes
Tests