Conversation
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:

PR Code Suggestions ✨
Explore these optional code suggestions:
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f0ebde8912
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review"
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```diff
- err := s.TryAggregateProofForRequestID(ctx, req.RequestID)
+ proof, err := s.TryAggregateProofForRequestID(ctx, req.RequestID)
```
Route catch-up aggregation through the deduplicating queue
When aggregation.catchup.enabled is on and proof generation is slow (for example, ZK proofs that take longer than the default 1m interval), the same request can now be aggregated twice in parallel. HandleAggregationRequests already consumes live requests via the workqueue (cmd/relay/root/app.go:557-569), but the catch-up loop scans the latest epoch and calls TryAggregateProofForRequestID directly, bypassing that de-duplication and in-flight tracking. Because a request remains visible in GetSignatureRequestsWithoutAggregationProof until its proof is saved, the catch-up loop can start a second expensive proof for a request that a worker is already processing.
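To illustrate the fix being suggested, here is a minimal sketch of a deduplicating queue with in-flight tracking that both the live path and the catch-up scan could enqueue into. The type and method names (`dedupQueue`, `Enqueue`, `Done`) are hypothetical stand-ins, not the repo's actual workqueue API:

```go
package main

import (
	"fmt"
	"sync"
)

// dedupQueue is a sketch of a deduplicating work queue: a request ID is
// dropped if it is already queued or currently being processed, so the
// catch-up scan cannot start a second proof for an in-flight request.
type dedupQueue struct {
	mu       sync.Mutex
	inFlight map[string]bool // IDs that are queued or being worked on
	ch       chan string
}

func newDedupQueue(size int) *dedupQueue {
	return &dedupQueue{inFlight: make(map[string]bool), ch: make(chan string, size)}
}

// Enqueue adds requestID unless it is already in flight.
// It reports whether the ID was actually queued.
func (q *dedupQueue) Enqueue(requestID string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.inFlight[requestID] {
		return false // a worker already owns this request
	}
	q.inFlight[requestID] = true
	q.ch <- requestID
	return true
}

// Done marks requestID as finished so a later scan can legitimately
// re-enqueue it (e.g. if the proof still failed to save).
func (q *dedupQueue) Done(requestID string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	delete(q.inFlight, requestID)
}

func main() {
	q := newDedupQueue(8)
	// Live path enqueues the request first...
	fmt.Println(q.Enqueue("req-1")) // true
	// ...so the catch-up scan's duplicate attempt is rejected.
	fmt.Println(q.Enqueue("req-1")) // false
	<-q.ch
	q.Done("req-1")
	// After completion the request may be enqueued again if needed.
	fmt.Println(q.Enqueue("req-1")) // true
}
```

With this shape, the catch-up loop would call `Enqueue` for each request returned by the epoch scan instead of invoking TryAggregateProofForRequestID directly.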
```go
for i := range workerCount {
	go s.worker(ctx, i+1)
}
```
Reject non-positive aggregation worker counts
If an operator sets aggregation.worker-count to 0 or a negative value, this loop starts no workers and the app silently stops consuming queued aggregation requests. HandleSignatureProcessedMessage still enqueues request IDs, so live proof generation stalls unless the separate catch-up loop happens to rescue it later. Unlike signal.worker-count, the new aggregation setting is not validated anywhere in cmd/relay/root/config.go, so an invalid config is accepted at startup.
🧪 Test Coverage Report
PR Type
Enhancement, Tests, Documentation

Description
- Queue aggregation requests for concurrent workers
- Add configurable proof catch-up scanning
- Expose aggregation settings in CLI/config
- Improve role logging and catch-up tests
Diagram Walkthrough
File Walkthrough
5 files
- Start aggregation workers and catch-up loop
- Show validator aggregator and committer roles
- Queue requests and scan for missing proofs
- Log committer validator index for debugging
- Add validator index and typed role checks

1 file
- Add aggregation worker and catch-up settings

3 files
- Update validator printer call signatures
- Use `ValidatorSet` directly in network printers
- Pass `ValidatorSet` into operator tree printer

1 file
- Test catch-up flow and direct aggregation

1 file
- Document aggregation workers and catch-up options