share aggregation limit on node #5357
Conversation
On SSD:
ERROR: Not the same queries, cannot compare, difference: {'date_histogram_by_minute', 'term_aggregation_high_card_top_100', 'term_aggregation_high_card_top_1000', 'range_agg', 'term_aggregation_avg_per_log_level', 'date_histogram_by_hour', 'range_agg_with_date_histogram_with_metrics', 'range_agg_with_date_histogram'}

On GCS:
ERROR: Not the same queries, cannot compare, difference: {'range_agg_with_date_histogram', 'date_histogram_by_hour', 'date_histogram_by_minute', 'term_aggregation_avg_per_log_level', 'term_aggregation_high_card_top_1000', 'term_aggregation_high_card_top_100', 'range_agg', 'range_agg_with_date_histogram_with_metrics'}
@@ -9,17 +9,17 @@ json:
   match:
   count: 10
   sort:
-    - count: {"order" : "desc"}
+    - id: {"order" : "desc"}
what is this change about?
this fixes the flakiness in the test
If I understand correctly, this PR makes it so that a new query will fail if running it would cause the memory allocated across the whole node to exceed aggregation_memory_limit, correct?
Can we document that in the docstring of aggregation_memory_limit and in the Quickwit documentation?
Changes how aggregation limits are shared. Previously they were per request, now they are per node. The counters are only updated after an allocation in tantivy, which should keep the contention risk relatively low.
Yes, except that we still need to start executing the query and abort it once it exceeds the memory limit, since we don't know up front how much memory it will need. (Saying it "would fail" makes it sound like we could check that before running.)
Updated the docs
fix flaky tests
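The node-wide accounting discussed above could be sketched roughly as follows. This is a minimal illustration, not Quickwit's actual implementation: AggregationLimits, add_memory, release, and MemoryLimitExceeded are hypothetical names. A shared atomic counter is bumped after each allocation (matching the "counters are only updated after an allocation in tantivy" point), and the query is aborted when the node-wide total crosses the limit.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

/// Returned when an aggregation pushes the node over its memory budget.
#[derive(Debug, PartialEq)]
struct MemoryLimitExceeded;

/// Hypothetical node-wide aggregation memory accounting, shared by all
/// requests on the node via a cloned Arc.
#[derive(Clone)]
struct AggregationLimits {
    in_use_bytes: Arc<AtomicU64>,
    limit_bytes: u64,
}

impl AggregationLimits {
    fn new(limit_bytes: u64) -> Self {
        Self {
            in_use_bytes: Arc::new(AtomicU64::new(0)),
            limit_bytes,
        }
    }

    /// Record an allocation *after* it happened; abort the query if the
    /// node-wide total now exceeds the limit. Updating only after real
    /// allocations keeps contention on the atomic relatively low.
    fn add_memory(&self, bytes: u64) -> Result<(), MemoryLimitExceeded> {
        let total = self.in_use_bytes.fetch_add(bytes, Ordering::Relaxed) + bytes;
        if total > self.limit_bytes {
            Err(MemoryLimitExceeded)
        } else {
            Ok(())
        }
    }

    /// Return memory to the node-wide pool when a request finishes.
    fn release(&self, bytes: u64) {
        self.in_use_bytes.fetch_sub(bytes, Ordering::Relaxed);
    }
}

fn main() {
    let limits = AggregationLimits::new(1_000);
    // First request fits within the node-wide budget.
    assert!(limits.add_memory(600).is_ok());
    // A concurrent request pushes the *node* total over the limit and is aborted,
    // even though 600 bytes alone would be fine under a per-request limit.
    assert!(limits.add_memory(600).is_err());
    // Once the first request releases its memory, new requests succeed again.
    limits.release(600);
    limits.release(600);
    assert!(limits.add_memory(900).is_ok());
}
```

The key difference from the previous per-request scheme is that both requests draw from the same counter, so the second one fails even though it is individually small.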