feat: improve migration performances #1174
Conversation
Note: CodeRabbit detected other AI code review bot(s) on this pull request and avoids duplicating their findings, which may make its review less comprehensive.

Walkthrough (CodeRabbit)
Multiple storage/bucket SQL migrations were rewritten to use precomputed row_number-based batching and temporary tables: migration 17 switched to a staged transactions_ids table with batched commits and notifications; migration 18 moved to type-based filtering and row_number batching; migration 27 was replaced by a no-op notice; migrations 28–33 adopt row_number paging and adjusted indexes.
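The staged-table batching pattern the walkthrough describes — number the pending rows once, then process fixed-size slices and commit per slice — can be sketched in Python. This is an illustrative model, not the migrations' actual SQL; `stage_ids`, `process_in_batches`, and the sample ids are hypothetical names.

```python
def stage_ids(row_ids):
    """Assign each pending row a 1-based sequence number, mimicking
    row_number() OVER (ORDER BY ...) staged into a temp table."""
    return list(enumerate(row_ids, start=1))  # [(rn, id), ...]

def process_in_batches(staged, batch_size, apply_batch):
    """Walk the staged numbering in fixed-size slices; apply_batch stands in
    for the per-slice UPDATE + COMMIT of the real migration."""
    offset = 0
    while offset < len(staged):
        # rn in (offset, offset + batch_size] — half-open on the low side
        # because row_number() starts at 1.
        slice_ids = [row_id for rn, row_id in staged
                     if offset < rn <= offset + batch_size]
        apply_batch(slice_ids)
        offset += batch_size

done = []
process_in_batches(stage_ids(["t1", "t2", "t3", "t4", "t5"]), 2, done.append)
print(done)  # [['t1', 't2'], ['t3', 't4'], ['t5']]
```

Because the numbering is computed once up front, each pass touches only one slice instead of re-scanning the whole table, which is where the runtime and lock-contention win comes from.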
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@             Coverage Diff              @@
##           release/v2.3    #1174      +/-   ##
================================================
+ Coverage        81.84%   81.88%    +0.04%
================================================
  Files              187      187
  Lines             9033     9033
================================================
+ Hits              7393     7397        +4
+ Misses            1205     1203        -2
+ Partials           435      433        -2
Force-pushed from ee80d0d to 73f6d6d.
1 issue found across 8 files
<file name="internal/storage/bucket/migrations/33-fix-invalid-date-format/up.sql">
<violation number="1" location="internal/storage/bucket/migrations/33-fix-invalid-date-format/up.sql:37">
P2: `row_number()` is 1-based, so the new predicate drops one row per chunk and forces an extra loop whenever the total count is a multiple of `_batch_size`. Use `> _offset` / `<= _offset + _batch_size` to match the original pagination semantics.</violation>
</file>
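The off-by-one cubic flags can be reproduced with a few lines of Python. This is a standalone model of the two predicates applied to a 1-based `row_number()`, not code from the migration; `run_batches` and its parameters are illustrative names.

```python
def run_batches(total_rows, batch_size, in_chunk):
    """Simulate row_number()-based paging: loop until a chunk comes back empty.

    in_chunk(rn, offset, batch_size) models the WHERE predicate applied to
    the 1-based row_number rn.
    """
    rows = list(range(1, total_rows + 1))  # row_number() starts at 1
    chunks, offset = [], 0
    while True:
        chunk = [rn for rn in rows if in_chunk(rn, offset, batch_size)]
        if not chunk:
            break
        chunks.append(chunk)
        offset += batch_size
    return chunks

# Predicate flagged in the review: rn >= _offset AND rn < _offset + _batch_size
buggy = run_batches(10, 5, lambda rn, off, bs: off <= rn < off + bs)
# Suggested fix: rn > _offset AND rn <= _offset + _batch_size
fixed = run_batches(10, 5, lambda rn, off, bs: off < rn <= off + bs)

print(buggy)  # [[1, 2, 3, 4], [5, 6, 7, 8, 9], [10]] — short first chunk, extra pass
print(fixed)  # [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]  — two full batches
```

With 10 rows and a batch size of 5 (a multiple, as the review notes), the flagged predicate shifts every chunk boundary by one and needs a third pass for the last row, while the corrected bounds yield exactly two full batches.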
Summary by cubic
Speed up v2.3 migration scripts by switching to row-number batching, adding targeted indexes, and narrowing updates. This cuts runtime and lock contention on large datasets; the heavy PCV rebuild step is disabled in this backport.
Written for commit 38fc49c.