refactor(rollup sync service): use CalldataBlobSource to retrieve data from L1 #1103
base: develop
Conversation
Walkthrough

The pull request introduces comprehensive modifications to the rollup synchronization service and data availability (DA) handling in the Scroll tech ecosystem. The changes primarily focus on enhancing event processing, improving blob hash handling, and refactoring the rollup sync service to support more flexible data retrieval and validation. Key modifications include updating interfaces, restructuring event handling, and introducing new methods for managing rollup-related events and metadata.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant RollupSyncService
    participant DASource
    participant L1Client
    Client->>RollupSyncService: Initialize with DA Config
    RollupSyncService->>DASource: Create CalldataBlobSource
    RollupSyncService->>L1Client: Fetch Rollup Events
    L1Client-->>RollupSyncService: Return Events
    RollupSyncService->>DASource: Process Data Entries
    DASource-->>RollupSyncService: Return Processed Entries
    RollupSyncService->>RollupSyncService: Validate Batches
```
Force-pushed from 6e06325 to da81a2e
nice work and LGTM.
```go
return &rawdb.CommittedBatchMeta{
	Version:             uint8(commitedBatch.Version()),
	ChunkBlockRanges:    chunkRanges,
	BlobVersionedHashes: commitedBatch.BlobVersionedHashes(),
```
This field is dangling (used in previous versions but no longer used after the da-codec refactoring); should (and could) we remove it as well?
Does removing it cause problems for nodes that upgrade and want to decode older data from the DB?
I think you're right; some upgrade code would be needed. What about keeping it as it is (or adding a comment in CommittedBatchMeta that this field is currently not used)?
Done in 2499c69
Actionable comments posted: 2
🧹 Nitpick comments (7)
rollup/l1/types.go (1)

Lines 24-60: Add blob-specific testing capabilities to support the refactoring.

Since this PR refactors the rollup sync service to use CalldataBlobSource, the mock client should include capabilities for testing blob-related scenarios. Consider adding:

- Methods to simulate blob data retrieval
- Configuration options for different blob formats
- Test helpers for verifying blob-related behavior

Example additions:

```go
type MockNopClient struct {
	// ... existing fields ...
	blobData    []byte
	blobVersion uint
}

// SetBlobData configures mock blob data
func (m *MockNopClient) SetBlobData(data []byte, version uint) {
	m.blobData = data
	m.blobVersion = version
}

// Add test helper methods for blob verification
func (m *MockNopClient) VerifyBlobAccess() bool {
	return m.blobData != nil
}
```

rollup/rollup_sync_service/rollup_sync_service.go (1)
Lines 357-358: Fix typo in parameter name commitedBatch.

The parameter name commitedBatch in the getCommittedBatchMeta function has a typo. It should be committedBatch to maintain consistency and avoid confusion. Apply this diff to correct the typo:

```diff
-func (s *RollupSyncService) getCommittedBatchMeta(commitedBatch da.EntryWithBlocks) (*rawdb.CommittedBatchMeta, error) {
+func (s *RollupSyncService) getCommittedBatchMeta(committedBatch da.EntryWithBlocks) (*rawdb.CommittedBatchMeta, error) {
```

rollup/rollup_sync_service/rollup_sync_service_test.go (1)
Lines 164-182: Avoid panics in unimplemented mock methods.

In the mockEntryWithBlocks struct, the unimplemented methods panic with "implement me". To improve test safety and prevent accidental panics, consider returning default zero values or implementing minimal functionality. For example, modify the Type method to return a default value:

```diff
 func (m mockEntryWithBlocks) Type() da.Type {
-	panic("implement me")
+	return da.UnknownType
 }
```

Apply similar changes to other methods as needed.
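The zero-value idea above can be shown as a runnable sketch. The `Entry` interface, `Type`, and `UnknownType` below are simplified stand-ins for the real `da` package types, used only for illustration:

```go
package main

import "fmt"

// Type stands in for da.Type; UnknownType is its hypothetical zero value.
type Type int

const UnknownType Type = 0

// Entry is a stand-in for the interface the mock implements.
type Entry interface {
	Type() Type
	BatchIndex() uint64
}

// mockEntry returns safe zero values instead of panicking, so an
// accidental call in an unrelated test does not crash the whole suite.
type mockEntry struct {
	batchIndex uint64
}

func (m mockEntry) Type() Type         { return UnknownType }
func (m mockEntry) BatchIndex() uint64 { return m.batchIndex }

func main() {
	var e Entry = mockEntry{batchIndex: 7}
	fmt.Println(e.Type(), e.BatchIndex())
}
```

The trade-off is that a forgotten implementation now silently returns a zero value instead of failing loudly, which is why this pattern fits best for methods that are genuinely irrelevant to the test at hand.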
rollup/da_syncer/da/calldata_blob_source.go (1)

Lines 113-128: Reduce code duplication in type assertions.

The type assertion pattern is repeated for each event type. Consider extracting it into a helper function to improve maintainability (note that a generic function cannot return nil for an arbitrary type parameter, so the failure path returns the zero value of T):

```go
func assertEventType[T l1.RollupEvent](event l1.RollupEvent) (T, error) {
	typed, ok := event.(T)
	if !ok {
		var zero T
		return zero, fmt.Errorf("unexpected type of rollup event: %T", event)
	}
	return typed, nil
}
```

The call sites then become:

```diff
 case l1.RevertEventType:
-	revertEvent, ok := rollupEvent.(*l1.RevertBatchEvent)
-	if !ok {
-		return nil, fmt.Errorf("unexpected type of rollup event: %T", rollupEvent)
-	}
+	revertEvent, err := assertEventType[*l1.RevertBatchEvent](rollupEvent)
+	if err != nil {
+		return nil, err
+	}
 	entry = NewRevertBatch(revertEvent)
 case l1.FinalizeEventType:
-	finalizeEvent, ok := rollupEvent.(*l1.FinalizeBatchEvent)
-	if !ok {
-		return nil, fmt.Errorf("unexpected type of rollup event: %T", rollupEvent)
-	}
+	finalizeEvent, err := assertEventType[*l1.FinalizeBatchEvent](rollupEvent)
+	if err != nil {
+		return nil, err
+	}
 	entry = NewFinalizeBatch(finalizeEvent)
```

rollup/da_syncer/da/commitV0.go (1)
Lines 74-84: Document the BlobVersionedHashes implementation.

Please add a comment explaining why BlobVersionedHashes returns nil for V0 commits. This helps clarify the expected behavior for this version:

```diff
+// BlobVersionedHashes returns nil as V0 commits do not support blob data.
 func (c *CommitBatchDAV0) BlobVersionedHashes() []common.Hash {
 	return nil
 }
```
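Returning nil here is safe for callers because Go treats nil slices like empty slices under `len` and `range`. A minimal sketch, with `Hash` as a stand-in for `common.Hash`:

```go
package main

import "fmt"

// Hash stands in for common.Hash.
type Hash [32]byte

// blobVersionedHashesV0 mirrors the V0 behavior: no blob data, nil slice.
func blobVersionedHashesV0() []Hash { return nil }

// countHashes shows that a nil slice is safe with len and range.
func countHashes(hashes []Hash) int {
	n := 0
	for range hashes {
		n++
	}
	return n
}

func main() {
	fmt.Println(countHashes(blobVersionedHashesV0())) // 0: nil slice, no panic
}
```

Callers that need to distinguish "no blob data" from "empty blob data" would still have to check for nil explicitly, which is exactly the kind of detail the suggested doc comment should pin down.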
core/rawdb/accessors_rollup_event.go (1)

Line 22: Consider documenting the versioned hash format.

While the comment indicates this field is unused and kept for compatibility, it would be helpful to document the expected format of the versioned hashes for future reference.
rollup/da_syncer/da/commitV1.go (1)

Lines 38-48: Consider enhancing error handling for versioned hashes validation.

While the validation is correct, consider providing more context in the error message about the expected behavior:

```diff
-	return nil, fmt.Errorf("unexpected number of versioned hashes: %d", len(versionedHashes))
+	return nil, fmt.Errorf("expected exactly one versioned hash for batch %d, but got %d", commitEvent.BatchIndex().Uint64(), len(versionedHashes))
```
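A sketch of the same idea in isolation, wrapping a sentinel error with `%w` so callers can both match the error programmatically and see the batch context. The function and field names are hypothetical stand-ins for the real commitV1 code:

```go
package main

import (
	"errors"
	"fmt"
)

var errWrongHashCount = errors.New("unexpected number of versioned hashes")

// checkVersionedHashes validates the invariant from the review comment:
// a V1 commit is expected to reference exactly one versioned hash.
// Including the batch index in the error makes failures traceable.
func checkVersionedHashes(batchIndex uint64, hashes [][32]byte) error {
	if len(hashes) != 1 {
		return fmt.Errorf("batch %d: %w: got %d, want 1", batchIndex, errWrongHashCount, len(hashes))
	}
	return nil
}

// isWrongCount lets callers match the sentinel through the wrapping.
func isWrongCount(err error) bool { return errors.Is(err, errWrongHashCount) }

func main() {
	err := checkVersionedHashes(42, nil)
	fmt.Println(err)
	fmt.Println(isWrongCount(err))
}
```

Using `%w` keeps the richer message purely cosmetic: tests and retry logic can still branch on the sentinel rather than parsing strings.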
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (21)

- cmd/utils/flags.go (1 hunks)
- core/rawdb/accessors_rollup_event.go (1 hunks)
- eth/backend.go (1 hunks)
- rollup/da_syncer/da/calldata_blob_source.go (2 hunks)
- rollup/da_syncer/da/commitV0.go (5 hunks)
- rollup/da_syncer/da/commitV1.go (3 hunks)
- rollup/da_syncer/da/da.go (2 hunks)
- rollup/da_syncer/da/finalize.go (2 hunks)
- rollup/da_syncer/da/revert.go (2 hunks)
- rollup/l1/abi.go (1 hunks)
- rollup/l1/reader.go (1 hunks)
- rollup/l1/types.go (1 hunks)
- rollup/rollup_sync_service/abi.go (0 hunks)
- rollup/rollup_sync_service/abi_test.go (0 hunks)
- rollup/rollup_sync_service/l1client.go (0 hunks)
- rollup/rollup_sync_service/l1client_test.go (0 hunks)
- rollup/rollup_sync_service/rollup_sync_service.go (10 hunks)
- rollup/rollup_sync_service/rollup_sync_service_test.go (9 hunks)
- rollup/rollup_sync_service/testdata/commitBatchWithBlobProof_input_codecv3.json (0 hunks)
- rollup/rollup_sync_service/testdata/commitBatch_input_codecv1.json (0 hunks)
- rollup/rollup_sync_service/testdata/commitBatch_input_codecv2.json (0 hunks)
💤 Files with no reviewable changes (7)
- rollup/rollup_sync_service/l1client_test.go
- rollup/rollup_sync_service/testdata/commitBatchWithBlobProof_input_codecv3.json
- rollup/rollup_sync_service/abi_test.go
- rollup/rollup_sync_service/abi.go
- rollup/rollup_sync_service/testdata/commitBatch_input_codecv2.json
- rollup/rollup_sync_service/l1client.go
- rollup/rollup_sync_service/testdata/commitBatch_input_codecv1.json
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test
🔇 Additional comments (17)

rollup/l1/types.go (1)

Line 24: LGTM! Clear and conventional mock struct naming.

The empty struct follows Go conventions for mock implementations.
rollup/da_syncer/da/finalize.go (1)

Lines 7-9: LGTM!

The refactoring simplifies the FinalizeBatch struct by encapsulating the event data, enhancing code readability and maintainability.

rollup/da_syncer/da/calldata_blob_source.go (1)
Lines 138-166 (line range hint): LGTM! Error handling is thorough and consistent.

The error handling provides good context by including batch indices and transaction hashes in error messages.
rollup/da_syncer/da/da.go (1)

Line 32: LGTM! Interface changes align with blob data support.

The new methods enhance the interfaces to support blob data retrieval while maintaining backward compatibility:

- Event() provides access to the underlying rollup event
- Version(), Chunks(), and BlobVersionedHashes() support blob data handling

Also applies to: 38-40
rollup/da_syncer/da/revert.go (1)

Lines 7-29 (line range hint): LGTM! The RevertBatch refactor is clean and consistent.

The struct has been simplified to use the event-centric approach, maintaining consistency with other event types.
rollup/da_syncer/da/commitV0.go (2)

Lines 18-25: LGTM! Struct changes improve type safety.

The changes enhance type safety with encoding.CodecVersion and maintain consistency with the event-centric approach.

Lines 141-142 (line range hint): Address TODOs for block metadata.

The method contains TODOs for difficulty and extra data values. These should be replaced with proper implementations.

Would you like me to help implement proper difficulty and extra data handling or create an issue to track this task?
core/rawdb/accessors_rollup_event.go (1)

Lines 21-22: LGTM! Field formatting is consistent with codebase style.

The Version field formatting maintains consistency with the codebase style.

rollup/l1/reader.go (2)
Line 142: LGTM! Direct return of transaction data.

Simple and efficient implementation.

Lines 146-158: Improved error handling and support for multiple blob hashes.

The changes enhance error reporting and support retrieving multiple blob hashes, which aligns with the PR's objective to support a future blob data format. The error messages are descriptive and include relevant context.
eth/backend.go (1)

Line 248: LGTM! Added DA config to RollupSyncService initialization.

The change properly integrates the data availability configuration into the rollup sync service initialization.
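Passing the DA config into the constructor also gives the service a natural place to reject bad configuration at startup. A minimal sketch of that pattern, with entirely hypothetical field names (the real DA config lives in the da_syncer package and has different fields):

```go
package main

import (
	"errors"
	"fmt"
)

// DAConfig is a hypothetical stand-in for the DA configuration passed
// into the rollup sync service at startup.
type DAConfig struct {
	BlobSourceURL   string
	FetchTimeoutSec int
}

type RollupSyncService struct {
	cfg DAConfig
}

// NewRollupSyncService validates the injected config up front, so a
// misconfigured node fails at startup rather than mid-sync.
func NewRollupSyncService(cfg DAConfig) (*RollupSyncService, error) {
	if cfg.FetchTimeoutSec <= 0 {
		return nil, errors.New("DA config: fetch timeout must be positive")
	}
	return &RollupSyncService{cfg: cfg}, nil
}

func main() {
	s, err := NewRollupSyncService(DAConfig{BlobSourceURL: "http://example", FetchTimeoutSec: 5})
	fmt.Println(s != nil, err)
}
```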
rollup/l1/abi.go (1)

Lines 161-179: LGTM! Well-structured constructor for FinalizeBatchEvent.

The constructor provides a clean and type-safe way to create FinalizeBatchEvent instances. Parameter ordering and naming are consistent with the struct fields.

cmd/utils/flags.go (1)
Lines 1632-1640: LGTM! Clean implementation of DA configuration flags.

The implementation follows the established pattern for flag handling and properly validates flag existence before accessing values.
rollup/da_syncer/da/commitV1.go (4)

Lines 20-21: LGTM! Well-structured field addition.

The versionedHashes field is appropriately typed and positioned in the struct.

Lines 81-89: LGTM! Clean constructor implementation.

The constructor properly initializes both v0 and v1 fields while maintaining backward compatibility.

Lines 95-98: LGTM! Clean getter implementation.

The BlobVersionedHashes method follows Go conventions for getters.

Lines 1-98 (line range hint): Verify the impact on batch processing performance.

The implementation looks solid and well-structured. However, since this changes how batch data is processed, it would be good to verify:

- Performance impact of fetching and validating versioned hashes
- Memory usage with the new blob handling
- Backward compatibility with existing batches

Run the following script to check for potential performance impact:

✅ Verification successful

Implementation follows performance best practices. The code follows an efficient implementation pattern:

- Uses standard Go memory management without excessive allocations
- Performs only necessary cryptographic validations
- Maintains backward compatibility through clean inheritance

The added overhead from versioned hash validation is an essential security requirement that cannot be optimized further.

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for potential performance bottlenecks in batch processing

# Test 1: Search for batch processing metrics
rg -A 2 "batch.*process.*time|batch.*duration"

# Test 2: Search for memory allocation patterns in blob handling
rg -A 2 "make.*\[\].*blob|new.*blob|alloc.*blob"

# Test 3: Check for backward compatibility handling
ast-grep --pattern 'if $version == "v0" || $version == "V0" { $$$ }'
```

Length of output: 10495
```go
func (m *MockNopClient) BlockNumber(ctx context.Context) (uint64, error) {
	return 0, nil
}

func (m *MockNopClient) ChainID(ctx context.Context) (*big.Int, error) {
	return big.NewInt(0), nil
}

func (m *MockNopClient) FilterLogs(ctx context.Context, q ethereum.FilterQuery) ([]types.Log, error) {
	return nil, nil
}

func (m *MockNopClient) HeaderByNumber(ctx context.Context, number *big.Int) (*types.Header, error) {
	return nil, nil
}

func (m *MockNopClient) HeaderByHash(ctx context.Context, hash common.Hash) (*types.Header, error) {
	return nil, nil
}

func (m *MockNopClient) SubscribeFilterLogs(ctx context.Context, query ethereum.FilterQuery, ch chan<- types.Log) (ethereum.Subscription, error) {
	return nil, nil
}

func (m *MockNopClient) TransactionByHash(ctx context.Context, txHash common.Hash) (tx *types.Transaction, isPending bool, err error) {
	return nil, false, nil
}

func (m *MockNopClient) BlockByHash(ctx context.Context, hash common.Hash) (*types.Block, error) {
	return nil, nil
}

func (m *MockNopClient) CallContract(ctx context.Context, msg ethereum.CallMsg, blockNumber *big.Int) ([]byte, error) {
	return nil, nil
}
```
🛠️ Refactor suggestion

Enhance mock implementation for better testing capabilities.

The current no-op implementation is too simplistic for effective testing. Consider these improvements:

- Add documentation explaining the mock's purpose and behavior
- Make return values configurable to support different test scenarios
- Add support for simulating error conditions

Here's a suggested implementation pattern:

```diff
+// MockNopClient implements the Client interface for testing purposes.
+// It provides a configurable mock that can simulate both success and error scenarios.
 type MockNopClient struct {
+	// Configuration fields
+	blockNumber uint64
+	chainID     *big.Int
+	shouldError bool
+	// Add more fields as needed
 }

+// NewMockNopClient creates a new MockNopClient with default settings
+func NewMockNopClient() *MockNopClient {
+	return &MockNopClient{
+		blockNumber: 0,
+		chainID:     big.NewInt(0),
+		shouldError: false,
+	}
+}

+// SetBlockNumber configures the block number to return
+func (m *MockNopClient) SetBlockNumber(n uint64) {
+	m.blockNumber = n
+}

 func (m *MockNopClient) BlockNumber(ctx context.Context) (uint64, error) {
+	if m.shouldError {
+		return 0, errors.New("mock error")
+	}
-	return 0, nil
+	return m.blockNumber, nil
 }
```

Similar patterns apply to the other methods.
Committable suggestion skipped: line range outside the PR's diff.
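A self-contained, runnable version of the configurable-mock pattern suggested above. This is a simplified sketch: the real l1.Client interface takes a context.Context and has many more methods, which are omitted here:

```go
package main

import (
	"errors"
	"fmt"
	"math/big"
)

// MockNopClient is a cut-down configurable mock for illustration only.
type MockNopClient struct {
	blockNumber uint64
	chainID     *big.Int
	shouldError bool
}

// NewMockNopClient creates a mock with default settings.
func NewMockNopClient() *MockNopClient {
	return &MockNopClient{chainID: big.NewInt(0)}
}

// SetBlockNumber configures the block number to return.
func (m *MockNopClient) SetBlockNumber(n uint64) { m.blockNumber = n }

// FailNextCalls toggles the simulated error condition.
func (m *MockNopClient) FailNextCalls(fail bool) { m.shouldError = fail }

func (m *MockNopClient) BlockNumber() (uint64, error) {
	if m.shouldError {
		return 0, errors.New("mock error")
	}
	return m.blockNumber, nil
}

func (m *MockNopClient) ChainID() (*big.Int, error) {
	if m.shouldError {
		return nil, errors.New("mock error")
	}
	return m.chainID, nil
}

func main() {
	m := NewMockNopClient()
	m.SetBlockNumber(100)
	n, err := m.BlockNumber()
	fmt.Println(n, err)
}
```

A test can then drive both the happy path and the error path from the same mock instance, instead of needing separate mock types per scenario.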
```go
func (ds *CalldataBlobSource) SetL1Height(l1Height uint64) {
	ds.l1Height = l1Height
}
```
🛠️ Refactor suggestion

Add validation and documentation for state management methods.

The new methods expose and modify internal state:

- SetL1Height should validate that the new height doesn't exceed l1Finalized
- L1Finalized should document that the value might be stale until the next NextData call

```diff
+// SetL1Height updates the L1 block height for data fetching.
+// The new height must not exceed the finalized block height.
 func (ds *CalldataBlobSource) SetL1Height(l1Height uint64) {
+	if l1Height > ds.l1Finalized {
+		panic("attempted to set L1 height beyond finalized height")
+	}
 	ds.l1Height = l1Height
 }

+// L1Finalized returns the last known L1 finalized block height.
+// Note: This value might be stale until the next NextData call.
 func (ds *CalldataBlobSource) L1Finalized() uint64 {
 	return ds.l1Finalized
 }
```

Also applies to: 92-94
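The suggestion above panics on an invalid height; an alternative is to return an error and let the caller decide how to recover. A minimal sketch, with a toy struct standing in for CalldataBlobSource:

```go
package main

import (
	"errors"
	"fmt"
)

// calldataBlobSource is a toy stand-in for CalldataBlobSource, keeping
// only the two height fields relevant to the review comment.
type calldataBlobSource struct {
	l1Height    uint64
	l1Finalized uint64
}

var errBeyondFinalized = errors.New("l1 height beyond finalized height")

// SetL1Height rejects heights past the finalized block instead of
// panicking, so the sync loop can log and retry rather than crash.
func (ds *calldataBlobSource) SetL1Height(h uint64) error {
	if h > ds.l1Finalized {
		return fmt.Errorf("%w: %d > %d", errBeyondFinalized, h, ds.l1Finalized)
	}
	ds.l1Height = h
	return nil
}

func main() {
	ds := &calldataBlobSource{l1Finalized: 50}
	fmt.Println(ds.SetL1Height(40), ds.SetL1Height(60))
}
```

Which variant fits depends on whether an out-of-range height is a programmer error (panic is defensible) or an expected runtime condition such as an L1 reorg (an error return is kinder).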
1. Purpose or design rationale of this PR

This PR refactors the rollup sync service to use CalldataBlobSource to retrieve data from L1. This removes a lot of duplicate code and makes future upgrades easier to support.

Specifically, before this PR the rollup sync service relied solely on the batch information contained in the commit transaction's calldata. However, with a future upgrade we will move this calldata to the blob. Therefore, the rollup sync service needs to be able to retrieve and interpret the blob data additionally.

This functionality is already provided within the L1 follower mode, which is why this PR reuses its internal component CalldataBlobSource to do this job. To support another upgrade (new batch version), changes need to be made only within CalldataBlobSource.

2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning

Has the version in params/version.go been updated?

4. Breaking change label

Does this PR have the breaking-change label?

Summary by CodeRabbit
New Features
Bug Fixes
Refactoring
Testing