
Prevent Early Ordering Pushdown to Enable Aggregation Pushdown to MySQL #16278

Merged
merged 14 commits into from
Jul 4, 2024

Conversation

@systay (Collaborator) commented Jun 27, 2024

Description

This PR introduces a crucial check in the pushOrderingUnderAggr method to prevent the premature pushdown of the ORDER BY clause during the query planning process. By ensuring that the ordering is not pushed down too early, we allow the aggregation to be effectively pushed down to MySQL, optimizing query execution.

Details:

Added a Check for Planning Phase:

  • When attempting to push an Ordering under an Aggregation, we first check which planning phase we are in.
  • This check ensures that the ordering is only pushed down once the appropriate phase in the query planning process has been reached.
  • If planning has not yet reached the delegateAggregation phase, the method returns early, leaving the ordering intact.

Rationale:

Optimization:

  • Pushing down the ORDER BY clause too early can interfere with the ability to push down the aggregation to MySQL.
  • Aggregation pushdown to MySQL is crucial for optimizing query performance, as it allows MySQL to handle the aggregation, reducing the data that needs to be processed by the application.
  • By preventing early ordering pushdown, we maintain the potential for aggregation pushdown, resulting in more efficient query execution.

Impact:

  • This change primarily affects the query planning phase, ensuring that the ORDER BY clause is only pushed down at the correct time.
  • It improves the overall performance of queries involving aggregation by leveraging MySQL's capabilities more effectively.
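The guard described above can be sketched roughly as follows. This is an illustrative, simplified model rather than the actual Vitess planner code: pushOrderingUnderAggr and the delegateAggregation phase exist in the real planner, but the Phase values, signatures, and surrounding types here are assumptions.

```go
package main

import "fmt"

// Phase models the planner's rewrite phases in a simplified form.
// The real planner defines its phases in go/vt/vtgate/planbuilder/operators;
// the names and ordering here are illustrative.
type Phase int

const (
	physicalTransform Phase = iota
	initialPlanning
	pullDistinctFromUnion
	delegateAggregation
	addAggrOrdering
)

// pushOrderingUnderAggr sketches the guard added by this PR: before the
// delegateAggregation phase has been reached, we return early and leave
// the ordering where it is, so aggregation pushdown to MySQL remains possible.
func pushOrderingUnderAggr(current Phase) (pushed bool) {
	if current < delegateAggregation {
		// Too early: pushing the ORDER BY now could block
		// pushing the aggregation down to MySQL.
		return false
	}
	// ...the actual ordering rewrite would happen here...
	return true
}

func main() {
	fmt.Println(pushOrderingUnderAggr(initialPlanning))     // false
	fmt.Println(pushOrderingUnderAggr(delegateAggregation)) // true
}
```

The point is only the early return: the rewrite itself is unchanged, it just refuses to run until aggregation delegation has had its chance.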

Related Issue(s)

Issue: #16279

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • New or modified tests pass consistently locally and on CI
  • Documentation was added or is not required

vitess-bot (Contributor) commented Jun 27, 2024

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes); new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test; enhancements and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot vitess-bot bot added NeedsBackportReason If backport labels have been applied to a PR, a justification is required NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsIssue A linked issue is missing for this Pull Request NeedsWebsiteDocsUpdate What it says labels Jun 27, 2024
@systay systay added Type: Enhancement Logical improvement (somewhere between a bug and feature) Component: Query Serving and removed NeedsWebsiteDocsUpdate What it says NeedsBackportReason If backport labels have been applied to a PR, a justification is required labels Jun 27, 2024
@github-actions github-actions bot added this to the v21.0.0 milestone Jun 27, 2024
@systay systay changed the title feat: optimise aggregation with ORDER BY Prevent Early Ordering Pushdown to Enable Aggregation Pushdown to MySQL Jun 27, 2024
@systay systay removed NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsIssue A linked issue is missing for this Pull Request labels Jun 27, 2024
codecov bot commented Jun 27, 2024

Codecov Report

Attention: Patch coverage is 86.79245% with 14 lines in your changes missing coverage. Please review.

Project coverage is 68.71%. Comparing base (8685b9e) to head (102889a).
Report is 2 commits behind head on main.

Files Patch % Lines
go/vt/vtgate/planbuilder/operators/apply_join.go 69.23% 8 Missing ⚠️
...vt/vtgate/planbuilder/operators/offset_planning.go 93.33% 2 Missing ⚠️
...vtgate/planbuilder/plancontext/planning_context.go 91.30% 2 Missing ⚠️
.../vt/vtgate/planbuilder/operators/query_planning.go 66.66% 1 Missing ⚠️
...vt/vtgate/planbuilder/operators/queryprojection.go 87.50% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #16278      +/-   ##
==========================================
- Coverage   68.71%   68.71%   -0.01%     
==========================================
  Files        1547     1547              
  Lines      198290   198286       -4     
==========================================
- Hits       136248   136244       -4     
  Misses      62042    62042              


@systay systay marked this pull request as draft June 27, 2024 11:07
systay added 2 commits July 1, 2024 17:06
Signed-off-by: Andres Taylor <andres@planetscale.com>
@systay systay marked this pull request as ready for review July 2, 2024 05:50
@@ -369,14 +369,14 @@ func pushAggregationThroughApplyJoin(ctx *plancontext.PlanningContext, rootAggr

columns := &applyJoinColumns{}
output, err := splitAggrColumnsToLeftAndRight(ctx, rootAggr, join, !join.JoinType.IsInner(), columns, lhs, rhs)
join.JoinColumns = columns
systay (Collaborator, Author) commented:

This bug was not related to the rest of the PR, but it caused tests to fail, so it had to go.

}

col.GroupBy = addToGroupBy
systay (Collaborator, Author) commented:

This is another bug, unrelated to the issue being fixed, but it caused some plans to be missing GROUP BY expressions.

@GuptaManan100 (Member) left a comment:

LGTM

@@ -2108,9 +2108,9 @@
"Name": "user",
"Sharded": true
},
"FieldQuery": "select min(a.id), a.tcol1, weight_string(a.tcol1), weight_string(a.id) from `user` as a where 1 != 1 group by a.tcol1, weight_string(a.tcol1), weight_string(a.id)",
A reviewer (Member) commented:

Not just an optimization: the previous plan also looks wrong. It should have been selecting weight_string(min(a.id)), not weight_string(a.id).

@harshit-gangal (Member) left a comment:

Could you publish some benchmarks, since the PR description claims improved performance?

Comment on lines -2188 to 2191
"FieldQuery": "select foo, col, weight_string(foo) from `user` where 1 != 1 group by col, foo, weight_string(foo)",
"FieldQuery": "select foo, col, weight_string(foo) from `user` where 1 != 1 group by foo, col, weight_string(foo)",
"OrderBy": "1 ASC, (0|2) ASC",
"Query": "select foo, col, weight_string(foo) from `user` where id between :vtg1 and :vtg2 group by col, foo, weight_string(foo) order by `user`.col asc, foo asc",
"Query": "select foo, col, weight_string(foo) from `user` where id between :vtg1 and :vtg2 group by foo, col, weight_string(foo) order by `user`.col asc, foo asc",
"Table": "`user`"
A reviewer (Member) commented:

This change has no impact on the output.

@harshit-gangal (Member) left a comment:

We should add some e2e tests for the fix and any new cases added in the plan tests.

@harshit-gangal (Member) left a comment:

I could not find how we improved the planning unless the improvements are in tpch cases.

@systay (Collaborator, Author) commented Jul 2, 2024:

I could not find how we improved the planning unless the improvements are in tpch cases.

I added a plan to the bottom of aggr_cases.json.

https://github.com/vitessio/vitess/pull/16278/files#diff-991c94d18be04f57568bac5c003b73009599842b38ee859c489fa286325b9b30R7150

That query produces a very different plan on main:

{
  "QueryType": "SELECT",
  "Original": "select sum(user.type) from user join user_extra on user.team_id = user_extra.id group by user_extra.col order by user_extra.col",
  "Instructions": {
    "OperatorType": "Aggregate",
    "Variant": "Ordered",
    "Aggregates": "sum(0) AS sum(`user`.type)",
    "GroupBy": "1",
    "ResultColumns": 1,
    "Inputs": [
      {
        "OperatorType": "Sort",
        "Variant": "Memory",
        "OrderBy": "1 ASC",
        "Inputs": [
          {
            "OperatorType": "Join",
            "Variant": "Join",
            "JoinColumnIndexes": "L:0,R:0",
            "JoinVars": {
              "user_team_id": 1
            },
            "TableName": "`user`_user_extra",
            "Inputs": [
              {
                "OperatorType": "Route",
                "Variant": "Scatter",
                "Keyspace": {
                  "Name": "user",
                  "Sharded": true
                },
                "FieldQuery": "select `user`.type, `user`.team_id from `user` where 1 != 1",
                "Query": "select `user`.type, `user`.team_id from `user`",
                "Table": "`user`"
              },
              {
                "OperatorType": "Route",
                "Variant": "Scatter",
                "Keyspace": {
                  "Name": "user",
                  "Sharded": true
                },
                "FieldQuery": "select user_extra.col from user_extra where 1 != 1",
                "Query": "select user_extra.col from user_extra where user_extra.id = :user_team_id",
                "Table": "user_extra"
              }
            ]
          }
        ]
      }
    ]
  },
  "TablesUsed": [
    "user.user",
    "user.user_extra"
  ]
}

As you can see here, MySQL is not doing any aggregation at all.

@systay (Collaborator, Author) commented Jul 2, 2024:

                                                 │    old.txt    │               new.txt                │
                                                 │    sec/op     │    sec/op     vs base                │
ShardedAggrPushDown/user-100-user_extra-100-10      15.41m ±  6%   16.71m ± 12%   +8.41% (p=0.029 n=10)
ShardedAggrPushDown/user-100-user_extra-500-10      15.21m ±  3%   15.52m ±  1%   +1.99% (p=0.015 n=10)
ShardedAggrPushDown/user-100-user_extra-1000-10     15.23m ±  2%   15.59m ±  1%   +2.38% (p=0.005 n=10)
ShardedAggrPushDown/user-500-user_extra-100-10      73.57m ±  1%   46.18m ±  3%  -37.23% (p=0.000 n=10)
ShardedAggrPushDown/user-500-user_extra-500-10      74.65m ±  1%   76.05m ±  1%   +1.87% (p=0.002 n=10)
ShardedAggrPushDown/user-500-user_extra-1000-10     77.46m ± 12%   76.23m ±  2%        ~ (p=0.796 n=10)
ShardedAggrPushDown/user-1000-user_extra-100-10    143.79m ±  2%   58.23m ±  1%  -59.51% (p=0.000 n=10)
ShardedAggrPushDown/user-1000-user_extra-500-10     146.1m ±  1%   131.8m ±  1%   -9.78% (p=0.000 n=10)
ShardedAggrPushDown/user-1000-user_extra-1000-10    146.3m ±  2%   151.8m ±  1%   +3.80% (p=0.000 n=10)
geomean                                             55.08m         47.63m        -13.53%

                                                 │    old.txt    │                new.txt                │
                                                 │     B/op      │     B/op       vs base                │
ShardedAggrPushDown/user-100-user_extra-100-10     11.39Ki ±  1%   11.46Ki ±  2%        ~ (p=0.796 n=10)
ShardedAggrPushDown/user-100-user_extra-500-10     11.48Ki ±  4%   11.46Ki ±  4%        ~ (p=0.780 n=10)
ShardedAggrPushDown/user-100-user_extra-1000-10    11.39Ki ±  1%   11.37Ki ±  1%        ~ (p=0.837 n=10)
ShardedAggrPushDown/user-500-user_extra-100-10     13.34Ki ±  8%   12.09Ki ±  8%   -9.40% (p=0.019 n=10)
ShardedAggrPushDown/user-500-user_extra-500-10     49.11Ki ±  2%   49.13Ki ±  2%        ~ (p=0.962 n=10)
ShardedAggrPushDown/user-500-user_extra-1000-10    50.10Ki ±  2%   49.62Ki ±  1%        ~ (p=0.267 n=10)
ShardedAggrPushDown/user-1000-user_extra-100-10    15.34Ki ± 13%   12.46Ki ± 10%  -18.79% (p=0.001 n=10)
ShardedAggrPushDown/user-1000-user_extra-500-10    50.43Ki ±  5%   50.11Ki ±  4%   -0.65% (p=0.048 n=10)
ShardedAggrPushDown/user-1000-user_extra-1000-10   93.17Ki ±  2%   93.16Ki ±  2%        ~ (p=0.122 n=10)
geomean                                            24.78Ki         23.92Ki         -3.48%

                                                 │   old.txt   │              new.txt               │
                                                 │  allocs/op  │  allocs/op   vs base               │
ShardedAggrPushDown/user-100-user_extra-100-10      220.0 ± 0%    216.0 ± 0%  -1.82% (p=0.000 n=10)
ShardedAggrPushDown/user-100-user_extra-500-10      220.0 ± 0%    216.0 ± 0%  -1.82% (p=0.000 n=10)
ShardedAggrPushDown/user-100-user_extra-1000-10     220.0 ± 0%    216.0 ± 0%  -1.82% (p=0.000 n=10)
ShardedAggrPushDown/user-500-user_extra-100-10      220.0 ± 0%    216.0 ± 0%  -1.82% (p=0.000 n=10)
ShardedAggrPushDown/user-500-user_extra-500-10     1.022k ± 0%   1.018k ± 0%  -0.39% (p=0.000 n=10)
ShardedAggrPushDown/user-500-user_extra-1000-10    1.022k ± 0%   1.018k ± 0%  -0.39% (p=0.000 n=10)
ShardedAggrPushDown/user-1000-user_extra-100-10     220.0 ± 0%    216.0 ± 0%  -1.82% (p=0.000 n=10)
ShardedAggrPushDown/user-1000-user_extra-500-10    1.022k ± 0%   1.018k ± 0%  -0.39% (p=0.000 n=10)
ShardedAggrPushDown/user-1000-user_extra-1000-10   2.023k ± 0%   2.019k ± 0%  -0.20% (p=0.000 n=10)

@@ -5324,8 +5324,8 @@
"Name": "user",
"Sharded": true
},
"FieldQuery": "select count(*), :user_id + user_extra.id, weight_string(:user_id + user_extra.id) from user_extra where 1 != 1 group by :user_id + user_extra.id",
"Query": "select count(*), :user_id + user_extra.id, weight_string(:user_id + user_extra.id) from user_extra group by :user_id + user_extra.id",
"FieldQuery": "select count(*), :user_id + user_extra.id, weight_string(:user_id + user_extra.id) from user_extra where 1 != 1 group by :user_id + user_extra.id, weight_string(:user_id + user_extra.id)",
A reviewer (Contributor) commented:

Does the additional grouping by weight string make sense? 🤔 If so, can you explain why?

systay (Collaborator, Author) replied:

No, they don't really make sense.

A reviewer (Member) replied:

They are needed for some specific cases; we have CI tests that fail when the weight_string expressions are removed.
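For context on why those weight_string expressions matter: VTGate compares and groups textual values by their collation weight string rather than by raw bytes, so that values which collate equal (e.g. 'a' and 'A' under a case-insensitive collation) land in the same group. The following is a toy illustration of that idea, not the real collation code; the caseInsensitiveWeight helper is a stand-in for a true weight string.

```go
package main

import (
	"fmt"
	"strings"
)

// caseInsensitiveWeight is a toy stand-in for a collation weight string:
// under a case-insensitive collation, values differing only in case must
// compare (and therefore group) as equal.
func caseInsensitiveWeight(s string) string {
	return strings.ToUpper(s)
}

// groupCounts groups values by the key function and counts members,
// mimicking a GROUP BY over a textual column.
func groupCounts(values []string, key func(string) string) map[string]int {
	counts := make(map[string]int)
	for _, v := range values {
		counts[key(v)]++
	}
	return counts
}

func main() {
	values := []string{"a", "A", "b"}

	// Grouping by raw bytes wrongly splits 'a' and 'A' into two groups.
	raw := groupCounts(values, func(s string) string { return s })
	fmt.Println(len(raw)) // 3

	// Grouping by the weight string merges them, as the collation requires.
	weighted := groupCounts(values, caseInsensitiveWeight)
	fmt.Println(len(weighted)) // 2
}
```

This is why the generated queries add weight_string(...) to both the select list and the GROUP BY for certain textual or mixed-type expressions.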

{
"OperatorType": "Projection",
"Expressions": [
"sum(volume) * count(*) as revenue",
A reviewer (Member) commented:

Can you explain why we are not passing volume as an index, like we do for the other columns?

systay (Collaborator, Author) replied:

We are; the plan printout just doesn't show it.

systay added 2 commits July 3, 2024 10:22
@systay systay marked this pull request as draft July 3, 2024 10:45
systay added 2 commits July 3, 2024 12:54
@systay systay marked this pull request as ready for review July 3, 2024 11:21
systay added 2 commits July 3, 2024 13:58
This reverts commit f7c0ef7.

Turns out we do need them.

@systay systay merged commit 694a0cf into vitessio:main Jul 4, 2024
94 checks passed
@systay systay deleted the push-down-aggr branch July 4, 2024 12:30
Labels
Component: Query Serving Type: Enhancement Logical improvement (somewhere between a bug and feature)