avoid/minimize more deadlocks #16343
@@ -43,6 +43,35 @@ def is_database_connection_error(e: Exception) -> bool:
         if isinstance(e, ProxyException) and e.type == ProxyErrorTypes.no_db_connection:
             return True
         return False
 
+    @staticmethod
+    def is_database_retriable_exception(e: Exception) -> bool:
+        """
+        Returns True if the exception is from a condition (e.g. deadlock, broken connection, etc.) that should be retried.
+        """
+        import re
+
+        if isinstance(e, DB_CONNECTION_ERROR_TYPES):  # TODO: is this actually needed?
+            return True
Comment on lines +54 to +55 (Contributor, Author):
I am keeping this just because it was the old logic. I have no idea, nor a way to practically test, if this is appropriate or not.
+
+        # Deadlocks should normally be retried.
+        # Postgres right now, on deadlock, triggers an exception similar to:
+        # Error occurred during query execution: ConnectorError(ConnectorError { user_facing_error: None,
+        # kind: QueryError(PostgresError { code: "40P01", message: "deadlock detected", severity: "ERROR",
+        # detail: Some("Process 3753505 waits for ShareLock on transaction 5729447; blocked by process 3755128.\n
+        # Process 3755128 waits for ShareLock on transaction 5729448; blocked by process 3753505."), column: None,
+        # hint: Some("See server log for query details.") }), transient: false })
+        # Unfortunately there does not seem to be an easy way to properly parse that or otherwise detect the specific
+        # issue, so just match using a regular expression. This is definitely not ideal, but not much we can do about
+        # it.
Comment on lines +64 to +66 (Contributor, Author):
At least, I could not find a better way. Suggestions are most welcome.
+        if re.search(r'\bConnectorError\b.*?\bQueryError\b.*?\bPostgresError\b.*?"40P01"', str(e), re.DOTALL):
+            return True
+
+        # TODO: add additional specific cases (be careful to not add exceptions that should not be retried!)
+        # If many more additional regular expressions are added, it may make sense to combine them into a single one,
+        # or use something like hyperscan.
+
+        return False
 
     @staticmethod
     def handle_db_exception(e: Exception):
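To make the detection concrete, here is a small self-contained sketch that exercises the same regular expression against an abbreviated version of the ConnectorError message quoted in the code comment above. Only the pattern and the re.DOTALL flag come from the diff; the sample strings (including the 23505 unique-violation counterexample) are illustrative.

```python
import re

# Regex copied from is_database_retriable_exception in the diff above.
DEADLOCK_PATTERN = r'\bConnectorError\b.*?\bQueryError\b.*?\bPostgresError\b.*?"40P01"'

# Abbreviated version of the Postgres deadlock error quoted in the code comment.
sample_error = (
    'Error occurred during query execution: ConnectorError(ConnectorError { user_facing_error: None, '
    'kind: QueryError(PostgresError { code: "40P01", message: "deadlock detected", severity: "ERROR", '
    'detail: Some("..."), column: None, hint: Some("See server log for query details.") }), transient: false })'
)

# re.DOTALL lets ".*?" span newlines, since the real message embeds "\n" in the detail field.
assert re.search(DEADLOCK_PATTERN, sample_error, re.DOTALL) is not None

# A non-deadlock error (e.g. a unique-constraint violation, code 23505) must not match,
# otherwise non-retriable failures would be retried.
non_deadlock = 'ConnectorError(ConnectorError { kind: QueryError(PostgresError { code: "23505", ... }) })'
assert re.search(DEADLOCK_PATTERN, non_deadlock, re.DOTALL) is None
```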
why remove the retry logic? @CAFxX
if the request does fail, what would happen?
@krrishdholakia the current retry logic seems to be broken... at least that is what our monitoring tells us: a single deadlock in the database immediately causes a spend exception, without any retry; if the logic were working correctly, there would have to be multiple deadlocks in a row before a single spend exception. This also seems to be confirmed by the definition of `DB_CONNECTION_ERROR_TYPES` (I do not understand at all what those `httpx` exceptions are, or why they are considered relevant here; I just know that the logic does not seem to be working).

FWIW, I am not removing the retry logic; I just moved it to a single place (`_handle_db_exception_retriable`), since it was duplicated with small variations in so many different places, and hopefully fixed it as well to actually handle Postgres deadlocks.
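To illustrate how the consolidated helper could be consumed, here is a rough sketch of a retry wrapper built on is_database_retriable_exception. It is not the PR's `_handle_db_exception_retriable`; the wrapper name run_with_db_retries, the backoff parameters, and the class name PrismaDBExceptionHandler in the usage comment are assumptions made for the example.

```python
import asyncio
import random
from typing import Awaitable, Callable, TypeVar

T = TypeVar("T")


async def run_with_db_retries(
    op: Callable[[], Awaitable[T]],
    is_retriable: Callable[[Exception], bool],
    max_attempts: int = 3,
    base_delay_s: float = 0.1,
) -> T:
    """Retry a DB operation when the failure looks retriable (e.g. a Postgres deadlock)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await op()
        except Exception as e:
            # Give up on the last attempt, or when the error is not one we should retry;
            # is_retriable would be the is_database_retriable_exception helper from the diff.
            if attempt == max_attempts or not is_retriable(e):
                raise
            # Exponential backoff with jitter so concurrent workers do not re-deadlock in lockstep.
            await asyncio.sleep(base_delay_s * (2 ** (attempt - 1)) * (1.0 + random.random()))
    raise AssertionError("unreachable")


# Hypothetical call site (write_spend_update and the handler class name are placeholders):
#   await run_with_db_retries(write_spend_update,
#                             PrismaDBExceptionHandler.is_database_retriable_exception)
```

Bounding the attempts and adding jitter keeps a persistent deadlock from turning into an unbounded retry storm, while still absorbing the one-off deadlocks that the monitoring mentioned above is reporting.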