Hi @yogesh0509, ready to close it out. Just updated cairo-lib; it was out of date, causing the compiler to throw errors sometimes.
Sure, I am out right now and can take a look at it on Wednesday. Is that fine?
Sounds good.
Hello @yogesh0509, how are you doing? Please, what's the update?
Force-pushed from 01b179f to 0764fc7.
@JoE11-y, I have updated the PR. Can you check if the requirement has been completed?
Actionable comments posted: 2
🧹 Nitpick comments (11)
contracts/L2/tests/test_ZeroXBridgeL2.cairo (6)
95-160: Potential flakiness: commitment hash uses dynamic block timestamp.

You compute `expected_hash` with `get_block_timestamp()` after the burn. If block time advances between the burn and the computation, hashes will diverge and the assertion will fail. Either freeze the timestamp or derive the `commitment_hash` directly from the emitted `BurnEvent`. Apply one of the following:
- Freeze timestamp:
```diff
- let expected_hash = PoseidonTrait::new().update_with(data_to_hash).finalize();
+ // Freeze timestamp to avoid drift
+ let ts = get_block_timestamp().into();
+ let data_to_hash = BurnData {
+     caller: alice_addr.try_into().unwrap(),
+     amount: burn_amount_usd,
+     nonce: 0,
+     time_stamp: ts,
+ };
+ let expected_hash = PoseidonTrait::new().update_with(data_to_hash).finalize();
```
- Or, assert partially (user, amount, nonce) and skip recomputing the hash exactly, or fetch the commitment hash from the event payload and compare it for equality with a locally recomputed value if you set/fix timestamp first.
162-209: Test name and assertion read well; minor message nit.

`assert(initial_balance - final_balance == burn_amount, 'Token balance not reduced');` works, but if it fails the message reads as if it passed. Consider a stricter message like "final balance should be initial - burn_amount".
210-239: Coverage is good; consider also insufficient allowance and zero-amount burns.

Insufficient balance is covered. Two useful edge cases:
- zero-amount burn should revert (contract asserts > 0),
- allowance insufficient even when balance is sufficient.
I can add those as extra tests if you want.
509-571: Great incremental checks, but the "expected indices" comment likely mismatches MMR indexing.

The comment says: "leaf 0→0, leaf 1→1, leaf 2→2, leaf 3→4, leaf 4→5…". The common (Grin-style) MMR convention places the k-th leaf at position 2(k - 1) - popcount(k - 1) + 1 (1-based: 1, 2, 4, 5, 8, …) and the total node count (`last_pos`) at 2n - popcount(n); your comment looks inconsistent with either. Since the contract now records a computed index per commitment, please assert the actual per-leaf index returned by `get_commitment_index` for a few burns and update the comment to match the library's ground truth. Example patch to assert the index for the first burn (if leaf 1 is at position 1):
```diff
@@
- // Expected MMR indices: leaf 0 -> 0, leaf 1 -> 1, leaf 2 -> 2, leaf 3 -> 4, leaf 4 -> 5, etc.
+ // Expected MMR indices (verify with the library; Grin-style scheme: leaves at positions 1, 2, 4, 5, 8, ...)
@@
  let leaves_count_1 = merkle_manager.get_leaves_count();
  assert(leaves_count_1 == 1, 'Expected 1 leaf');
+ let idx1 = merkle_manager.get_commitment_index(
+     PoseidonTrait::new()
+         .update_with(
+             BurnData {
+                 caller: alice_addr.try_into().unwrap(),
+                 amount: burn_amount_usd,
+                 nonce: 0,
+                 time_stamp: get_block_timestamp().into(),
+             },
+         )
+         .finalize(),
+ );
+ assert(idx1 == 1, 'Unexpected index for leaf 1');
```

If recomputing the hash is noisy, retrieve it from the `BurnEvent` via `spy_events`.
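To make the two indexing notions concrete, here is a short Python sketch (illustrative only, not part of the PR) of the standard Grin-style MMR arithmetic: the k-th leaf lands at position 2(k - 1) - popcount(k - 1) + 1, while `last_pos` after n leaves is 2n - popcount(n):

```python
def popcount(n: int) -> int:
    """Number of set bits in n."""
    return bin(n).count("1")

def last_pos(n_leaves: int) -> int:
    """Total node count of an MMR holding n_leaves leaves: 2n - popcount(n)."""
    return 2 * n_leaves - popcount(n_leaves)

def leaf_pos(k: int) -> int:
    """1-based MMR position of the k-th leaf (k >= 1): the new leaf lands
    immediately after all nodes created so far, i.e. last_pos(k - 1) + 1."""
    return 2 * (k - 1) - popcount(k - 1) + 1

print([leaf_pos(k) for k in range(1, 9)])   # [1, 2, 4, 5, 8, 9, 11, 12]
print([last_pos(n) for n in range(1, 9)])   # [1, 3, 4, 7, 8, 10, 11, 15]
```

Whichever scheme cairo_lib actually uses must be confirmed against the library itself; the point is that `last_pos` equals the leaf position only when the append triggers no merges, so asserting `get_commitment_index` against hard-coded values will surface the convention quickly.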
573-628: Edge cases covered; add commitment index checks along power-of-two boundaries.

Since MMR shapes change at powers of two, also assert the stored commitment indices at i = 1, 2, 3, 4, 8. This will catch off-by-one errors in your index computation logic.
1-1: Scarb formatting failure.

CI reports `scarb fmt --check` failures for this test file. Please run `scarb fmt --write`.

contracts/L2/src/core/ZeroXBridgeL2.cairo (5)
403-409: Filtering zero peaks is fine; consider storing filtered peaks in storage.

You filter zeros at read time. If the MMR library returns fixed-size buffers with trailing zeros, consider storing only non-zero peaks in `last_peaks` to reduce storage writes/reads and avoid future callers bypassing `get_last_peaks`. Possible tweak in `append_withdrawal_hash`:

```diff
- for i in 0..peaks.len() {
-     self.last_peaks.push(*peaks.at(i));
- }
+ for i in 0..peaks.len() {
+     let p = *peaks.at(i);
+     if p != 0 {
+         self.last_peaks.push(p);
+     }
+ }
```
436-467: Double-check what index you store: leaf position vs. "last_pos".

- You write `node_index_to_root` keyed by `mmr.last_pos` after append. After merges, `last_pos` represents the last node in the structure (total nodes so far), not the appended leaf's position.
- Separately, you compute `correct_leaf_index` and store it under `commitment_hash_to_index`.

If downstream proof verification expects "leaf position" as the index, `commitment_hash_to_index` must hold the leaf's MMR position, not `last_pos`. Please confirm which notion of "index" your `MMRTrait::verify_proof` expects; if it expects leaf position, then any consumer using `node_index_to_root` with `last_pos` might be inconsistent. If you intend to preserve a map from leaf position to root as of append time, consider:

```diff
- self.node_index_to_root.write(mmr.last_pos, root_hash);
+ self.node_index_to_root.write(correct_leaf_index, root_hash);
```

And add a comment clarifying the semantics.
448-453: Peaks storage clear/update is fine; minor micro-optimization possible.

`clear_peaks_storage` pops one-by-one; harmless given tiny peak counts. If the storage Vec supports `truncate(0)` or `clear()`, it would cut syscalls; otherwise keep as-is.
244-286: Message nit: "Burn amount less than zero".

The assertion is `burn_amount_usd > 0`, so a clearer message is "Burn amount must be positive". (Note that felt252 short strings are limited to 31 characters, so "Burn amount must be greater than zero" would not fit.)

```diff
- assert(burn_amount_usd > 0, 'Burn amount less than zero');
+ assert(burn_amount_usd > 0, 'Burn amount must be positive');
```
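As a quick way to sanity-check candidate messages against Cairo's 31-character felt252 short-string limit, a plain Python length check (illustrative only) works:

```python
# Cairo short-string error messages must fit in a felt252:
# at most 31 ASCII bytes.
candidates = [
    'Burn amount less than zero',
    'Burn amount must be greater than zero',
    'Burn amount must be positive',
]
for msg in candidates:
    fits = len(msg.encode('ascii')) <= 31
    print(f"{len(msg):2d} {'ok' if fits else 'TOO LONG'} {msg!r}")
```

Only the first and third candidates fit; the verbose middle one is 37 bytes and would be rejected by the compiler.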
1-1: Scarb formatting failure.

CI reports `scarb fmt --check` failures for this source file. Please run `scarb fmt --write`.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (2)
- `contracts/L2/package-lock.json` is excluded by `!**/package-lock.json`
- `contracts/L2/yarn.lock` is excluded by `!**/yarn.lock`, `!**/*.lock`
📒 Files selected for processing (3)
- `contracts/L2/package.json` (0 hunks)
- `contracts/L2/src/core/ZeroXBridgeL2.cairo` (3 hunks)
- `contracts/L2/tests/test_ZeroXBridgeL2.cairo` (2 hunks)
💤 Files with no reviewable changes (1)
- contracts/L2/package.json
🧰 Additional context used
🪛 GitHub Actions: L2 contracts ci
contracts/L2/tests/test_ZeroXBridgeL2.cairo
[error] 1-1: scarb fmt --check failed for Cairo test file. Formatting issues detected. Run 'scarb fmt --write' to fix.
contracts/L2/src/core/ZeroXBridgeL2.cairo
[error] 1-1: scarb fmt --check failed for Cairo source. Formatting issues detected (e.g., whitespace, newline at end). Run 'scarb fmt --write' to fix.
🔇 Additional comments (5)
contracts/L2/tests/test_ZeroXBridgeL2.cairo (1)
13-13: Good addition: importing the IMerkleManager dispatcher/trait to introspect MMR state.

This enables the tests to query `leaves_count`, `root_hash`, and `element_count` from the bridge. No concerns here.
contracts/L2/src/core/ZeroXBridgeL2.cairo (4)
176-176: Good: initialize leaves_count in constructor.

Explicitly setting `leaves_count` to 0 avoids undefined state during the first append.
425-426: Switch to trait-based verify_proof looks good.

Passing `peaks.span()` and `proof.span()` from the caller aligns with trait expectations and makes the method stateless. No issues spotted.
383-427: Expose peaks cleanly via IMerkleManager; good separation.

The IMerkleManager implementation encapsulates MMR internals and surfaces root, element_count, peaks, and leaves_count. This aligns with the new tests and keeps mint/burn paths clean.
1-501: Manual verification needed: align `leaf_count_to_mmr_index` with upstream MMR semantics.

I wasn't able to locate the `MMR` trait or its `append`/`verify_proof` implementations in our local codebase; the Cairo library (cairo_lib) is pulled in as an external dependency. To prevent subtle off-by-one errors or miscounted internal nodes in our Merkle Mountain Range, please manually confirm that our `leaf_count_to_mmr_index` logic exactly matches the conventions in `cairo_lib/data_structures/mmr/mmr.cairo`. Specifically:

- Review the upstream `MMR` struct's `last_pos` semantics and how `append` computes the new root and position.
- Ensure `MMRTrait::append`'s internal-node counting aligns with our loop in `leaf_count_to_mmr_index`.
- Verify that `verify_proof` expects the same indexing (leaf position + internal nodes) that our bridge uses.
- Update any test expectations in `contracts/L2/src/mocks/MockMerkleManager.cairo` and `contracts/L2/tests/test_MerkleManager.cairo` if the library's index formula differs.

Without this manual check against the external library, merging risks misaligned proofs or incorrect root lookups.
Actionable comments posted: 1
♻️ Duplicate comments (1)
contracts/L2/src/core/ZeroXBridgeL2.cairo (1)
486-506: MMR index helper computes the last-node position, not the leaf position (repeat).

This repeats an earlier concern: the current math sums internal nodes and returns 2n - popcount(n), i.e., the last-node position after n inserts, not the nth leaf's position. If `verify_proof` expects leaf positions, this will mislabel indices at every merge boundary.
Proposed fix (standard Grin-style leaf positions 1, 2, 4, 5, 8, …; confirm the convention against cairo_lib before adopting):

```diff
-    fn leaf_count_to_mmr_index(leaf_count: felt252) -> felt252 {
-        if leaf_count == 0 {
-            return 0;
-        }
-        // For MMR, the first leaf is at index 1, second at index 2, etc.
-        // But we need to account for internal nodes
-        let mut internal_nodes: u64 = 0;
-        let mut temp: u64 = leaf_count.try_into().unwrap();
-
-        // Count internal nodes created by building the MMR
-        let mut level = 1_u64;
-        while level < temp {
-            internal_nodes += temp / (level * 2);
-            level *= 2;
-        }
-
-        // The MMR index is leaf position + internal nodes before it
-        let mmr_index = leaf_count + internal_nodes.into();
-        mmr_index
-    }
+    // Returns the MMR position of the k-th leaf under the common
+    // (Grin-style) convention: leaf #1 => 1, leaf #2 => 2, leaf #3 => 4,
+    // leaf #4 => 5, leaf #5 => 8, ...
+    // Position = 2 * (k - 1) - popcount(k - 1) + 1.
+    fn leaf_count_to_mmr_index(leaf_count: felt252) -> felt252 {
+        if leaf_count == 0 {
+            return 0;
+        }
+        let k: u64 = leaf_count.try_into().unwrap();
+        let mut rem = k - 1;
+        let mut ones = 0_u64;
+        while rem != 0 {
+            ones += rem % 2;
+            rem = rem / 2;
+        }
+        let pos: u64 = 2 * (k - 1) - ones + 1;
+        pos.into()
+    }
```

If you go with "Option A" in `append_withdrawal_hash` (deriving from `prev_last_pos`), consider removing this helper entirely to avoid confusion.
I can open a follow-up to add property-based tests asserting index mapping for n = 1..1024 under your chosen convention.
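As a cross-check on the claim above, a Python transcription of the original loop (names mirror the Cairo helper; illustrative only) confirms it computes 2n - popcount(n), which coincides with the Grin-style leaf position only when the append triggers no merge:

```python
def popcount(n: int) -> int:
    return bin(n).count("1")

def original_helper(leaf_count: int) -> int:
    """Transcription of the Cairo loop: adds sum(n // 2**i) internal nodes."""
    if leaf_count == 0:
        return 0
    internal_nodes = 0
    level = 1
    while level < leaf_count:
        internal_nodes += leaf_count // (level * 2)
        level *= 2
    return leaf_count + internal_nodes

def leaf_pos(k: int) -> int:
    """k-th leaf's 1-based position under the Grin-style convention."""
    return 2 * (k - 1) - popcount(k - 1) + 1

# The helper reproduces last_pos = 2n - popcount(n) for every n ...
for n in range(1, 1025):
    assert original_helper(n) == 2 * n - popcount(n)

# ... but that is not the leaf position once merges happen:
print([original_helper(n) for n in range(1, 6)])  # [1, 3, 4, 7, 8]
print([leaf_pos(n) for n in range(1, 6)])         # [1, 2, 4, 5, 8]
```

This is exactly the n = 1..1024 property check suggested above, under the Grin-style convention.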
🧹 Nitpick comments (1)
contracts/L2/src/core/ZeroXBridgeL2.cairo (1)
403-409: `get_last_peaks` filters zero entries; confirm zero can't be a valid peak.

Filtering out 0 cleans up legacy/placeholder state, but if Poseidon can theoretically output 0, this could drop a legitimate peak. If the MMR library guarantees non-zero node hashes, keep this. Otherwise, consider ensuring zeros are never written in the first place (your append path already repopulates from real peaks).
Would you confirm whether cairo_lib MMR node/peak hashes are guaranteed non-zero?
📒 Files selected for processing (2)
- `contracts/L2/src/core/ZeroXBridgeL2.cairo` (3 hunks)
- `contracts/L2/tests/test_ZeroXBridgeL2.cairo` (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- contracts/L2/tests/test_ZeroXBridgeL2.cairo
🧰 Additional context used
🪛 GitHub Actions: L2 contracts ci
contracts/L2/src/core/ZeroXBridgeL2.cairo
[warning] 482-482: Unhandled #[must_use] type core::option::Option<?8>.
🔇 Additional comments (3)
contracts/L2/src/core/ZeroXBridgeL2.cairo (3)
176-176: Good: initialize leaves_count in constructor.

Explicitly setting `leaves_count` to 0 avoids relying on storage defaults and helps with upgrade safety.
425-426: Switch to trait-based verification is a win.

Calling `MMRTrait::verify_proof(@mmr, …)` is cleaner and avoids method-resolution pitfalls. No issues spotted here.
433-473: The section containing `leaf_count_to_mmr_index` will be printed for review. Once we see its full implementation, we can confirm whether it indeed computes 2n - popcount(n) (i.e., the total node count for n leaves) and validate whether adding +1 is semantically equivalent to using `mmr.last_pos + 1`. Please share the output so we can conclude the verification.