This repository has been archived by the owner on May 30, 2024. It is now read-only.

Enable retries in the std.py backend. #31

Open
ShaheedHaque wants to merge 2 commits into master

Conversation

ShaheedHaque

Resolves #30. This PR also:

  • Replaces the deprecated call to requests.session() with its
    current counterpart, requests.Session().
  • Allows arbitrary connection kwargs to be passed down, not least
    so that the new retry behaviour can be overridden (see the sketch
    below).

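Here is the sketch referred to above: a minimal, hypothetical illustration
of the kind of change involved, i.e. a requests.Session() with a urllib3
Retry policy mounted through an HTTPAdapter, and with keyword arguments
that callers can use to override the retry defaults. The function name
build_session and the default values are assumptions for illustration,
not identifiers or settings taken from the actual diff.

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    def build_session(**kwargs):
        # Retry-related overrides are popped off first; anything left over
        # is applied to the Session itself (e.g. verify, cert, proxies).
        retry = Retry(
            total=kwargs.pop("total", 3),
            backoff_factor=kwargs.pop("backoff_factor", 0.3),
            status_forcelist=kwargs.pop("status_forcelist", (500, 502, 503, 504)),
        )
        session = requests.Session()  # Session(), not the deprecated session()
        adapter = HTTPAdapter(max_retries=retry)
        session.mount("http://", adapter)
        session.mount("https://", adapter)
        for name, value in kwargs.items():
            setattr(session, name, value)
        return session

A caller could then write build_session(total=5, verify=False) to raise the
retry count or change TLS verification, which is the sort of override the
connection kwargs are meant to allow.
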
@codecov

codecov bot commented Feb 20, 2021

Codecov Report

Merging #31 (8622b26) into master (7aa1ff4) will increase coverage by 0.01%.
The diff coverage is 100.00%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master      #31      +/-   ##
==========================================
+ Coverage   94.45%   94.47%   +0.01%     
==========================================
  Files           6        6              
  Lines        1840     1846       +6     
==========================================
+ Hits         1738     1744       +6     
  Misses        102      102              
Flag Coverage Δ
unittests 94.47% <100.00%> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Impacted Files Coverage Δ
consul/std.py 100.00% <100.00%> (ø)

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7aa1ff4...8622b26. Read the comment docs.

@ShaheedHaque
Author

Hi, is there any chance of a release with this PR incorporated? FWIW, there is a Celery fix (celery/celery#5605) which benefits from this, and which I had wanted to hold back until this was available, but I feel I need to pursue it now.

ShaheedHaque added a commit to ShaheedHaque/celery that referenced this pull request Jun 27, 2021
ShaheedHaque added a commit to ShaheedHaque/celery that referenced this pull request Jun 28, 2021
ShaheedHaque added a commit to ShaheedHaque/celery that referenced this pull request Aug 10, 2021
auvipy pushed a commit to celery/celery that referenced this pull request Aug 11, 2021
…6823)

* As per #5605, the Consul backend does not cleanly associate responses
from Consul with the outbound Celery request that caused it. This leaves
it prone to mistaking the (final) response from an operation N for the
response to an (early) part of operation N + 1.

This change fixes that by using a separate connection for each request.
That of course has the downside of (a) being relatively expensive and (b)
increasing the rate of connection requests into Consul:

- The former is annoying, but at least the backend works reliably.

- The latter can cause Consul to reject excessive connection attempts, but
  if it does, at least it returns a clear indication of this (IIRC, it
  responds with an HTTP 429 "too many connections" indication).

  Additionally, this issue can be ameliorated by enabling retries in
  python-consul2 (which I believe should be turned on regardless, to handle
  transient network issues). This is fixed by the PR at
  https://github.com/poppyred/python-consul2/pull/31.

Note that we have never seen (b) outside a test specifically trying to hammer
the system, but we see (a) all the time in our normal system tests.

To opt out of the new behaviour, add the parameter "one_client=1" to the
connection URL.

* Increase code coverage.

* Rewrite Consul backend documentation, and describe the options now
available.
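
For readers coming from the Celery side, the opt-out described in the
commit message above would be expressed on the result-backend URL roughly
as follows. This is an illustrative sketch: the Consul address is an
assumption, and the one_client parameter only takes effect with a Celery
release that contains the change above.

    from celery import Celery

    app = Celery("tasks")

    # New default behaviour: a fresh connection per Consul request.
    app.conf.result_backend = "consul://localhost:8500/"

    # Opt out and keep a single shared client:
    app.conf.result_backend = "consul://localhost:8500/?one_client=1"
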
jeyrce pushed a commit to jeyrce/celery that referenced this pull request Aug 25, 2021
@ShaheedHaque
Author

@poppyred Any chance of a release with this merged please?

Development

Successfully merging this pull request may close these issues.

Consul backend std.py is not resilient to connection errors
1 participant