ADBDEV-6520: Refactor strings into arguments when deleting/inserting into a table #41
base: gpdb
Conversation
Is there any evidence that "significantly less memory is spent" and that this "fixes the error duplicate key value violates unique constraint"?
That's obvious! String representations of integer arrays take significantly more memory. And uniqueness violations will no longer happen, because deletion will always happen before insertion.
Please describe in detail how I, as a reviewer, can verify that the changes work:
There are no tests for this. Testing it in practice is nearly impossible, because it would require a very large cluster with a huge number of active tables.
This is more of a theoretical patch.
This patch also fixes the error duplicate key ...
Please describe in more detail what the error is, how "the delete goes to one batch and the insert goes to another", and how the patch fixes it.
Refactor strings into arguments when deleting/inserting into a table
diskquota used long SQL strings when deleting from and inserting into the
diskquota.table_size table. This resulted in high memory consumption, both for
constructing such long strings and for parsing them. This patch reworks the
logic to pass the data as array-valued query arguments. As a result,
significantly less memory is spent: the query text itself is very short, so no
memory is wasted on constructing and parsing it, and the array arguments are
passed as-is.
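The shape of the change can be sketched as follows. This is a conceptual Python sketch, not the actual C code of the extension; the query text, column layout, and function names are illustrative assumptions, not taken from the diskquota source:

```python
# Old style: every row is embedded in the SQL text itself, so the query
# string grows linearly with the number of rows and the server must
# parse all of it.
def build_values_query(rows):
    values = ",".join(f"({oid},{size},{segid})" for oid, size, segid in rows)
    return f"INSERT INTO diskquota.table_size VALUES {values}"

# New style: the SQL text is constant-size; the row data travels as
# array arguments that are bound as-is, with no per-row string building
# or parsing. (One common query shape for this uses unnest.)
def build_param_query(rows):
    sql = ("INSERT INTO diskquota.table_size "
           "SELECT unnest($1), unnest($2), unnest($3)")
    oids = [r[0] for r in rows]
    sizes = [r[1] for r in rows]
    segids = [r[2] for r in rows]
    return sql, (oids, sizes, segids)

rows = [(16384 + i, 8192 * i, -1) for i in range(10_000)]
long_sql = build_values_query(rows)
short_sql, args = build_param_query(rows)

# The parameterized query text stays short no matter how many rows
# there are, while the VALUES string keeps growing.
assert len(short_sql) < 100 < len(long_sql)
```

The same idea applies in C via SPI's parameterized execution, where the array arguments are passed as Datums rather than rendered into the query string.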
This patch also fixes the error duplicate key value violates unique constraint
"table_size_pkey". In the flush_to_table_size function, it could happen that,
when updating a table's size, the delete went into one batch and the insert
into another; if the insert was executed first, a duplicate-key error occurred.
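The batching hazard described above can be reproduced in a toy simulation. This is an illustrative Python sketch, not the diskquota code: a dict stands in for the table with its primary key, and the names are hypothetical:

```python
class UniqueViolation(Exception):
    """Stands in for: duplicate key value violates unique constraint."""

def apply_ops(table, ops):
    for op, key, value in ops:
        if op == "delete":
            table.pop(key, None)
        else:  # "insert": the primary key must not already exist
            if key in table:
                raise UniqueViolation(key)
            table[key] = value

# Updating key 1 is done as a delete of the old row plus an insert of
# the new one.
ops = [("delete", 1, None), ("insert", 1, "new")]

# Buggy scenario: the pair straddles a batch boundary and the insert's
# batch happens to run first, so the insert sees the old row.
table = {1: "old"}
batches = [ops[1:], ops[:1]]  # insert batch executed before delete batch
try:
    for batch in batches:
        apply_ops(table, batch)
    buggy_failed = False
except UniqueViolation:
    buggy_failed = True

# Fixed ordering: all deletes are applied before any insert, so the
# insert always finds the key free.
table = {1: "old"}
deletes = [o for o in ops if o[0] == "delete"]
inserts = [o for o in ops if o[0] == "insert"]
apply_ops(table, deletes + inserts)
```

With the buggy batch order the insert raises the uniqueness error; with deletes guaranteed to run first, the update completes and the table ends up holding the new row.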
It is easier to view the changes with the "Hide whitespace" option enabled.