
Rate Limiting #12

Closed · 0x4007 opened this issue Apr 9, 2024 · 21 comments · Fixed by ubiquibot/comment-incentives#35

Comments

@0x4007 (Member) commented Apr 9, 2024

+ Evaluating results. Please wait...

We got rate limited https://github.com/ubiquibot/comment-incentives/actions/runs/8617933701/job/23619113189

Seems like this can be a problem as we pick up on activity in our network.

Originally posted by @0x4007 in #5 (comment)

@gentlementlegen (Member)

Since this repo should soon be deprecated, do you want a fix here? I guess this problem is relevant to the new system as well.

@0x4007 (Member, Author) commented Apr 13, 2024

Yeah, research and implementation should be handled wherever possible, and can be ported easily later if needed.

@gitcoindev (Contributor)

I did some research on this today, covering where and why it happens through how to solve it:

  1. The root cause of the rate limit is that the number of queued calls for the JsonRpcProvider in use crosses the 500-call queue limit.

  2. The actual rate limit error from the permit-generation RPC provider was thrown by the Nethermind client

https://github.com/NethermindEth/nethermind/blob/489f3277eddfba5b514d2c7779094b6981ec629e/src/Nethermind/Nethermind.JsonRpc/Modules/BoundedModulePool.cs#L34

which is set to 500 calls as a default on the client side:

https://github.com/NethermindEth/nethermind/blob/489f3277eddfba5b514d2c7779094b6981ec629e/src/Nethermind/Nethermind.Init/Steps/RegisterRpcModules.cs#L113
https://github.com/NethermindEth/nethermind/blob/489f3277eddfba5b514d2c7779094b6981ec629e/src/Nethermind/Nethermind.JsonRpc/JsonRpcConfig.cs#L18

One way to circumvent this is to use the ethers module's throttling feature, but as discovered in the ethers implementation, it kicks in only if the 429 status code is returned in the response, see:
ethers-io/ethers.js@7d43545

and if (response.statusCode === 429 && attempt < attemptLimit)

  3. Since different providers may use different error codes (Nethermind returned error -32005) and different rate limits, I propose using https://github.com/ubiquity/rpc-handler, which can return the fastest provider and an array of available providers for a given network.

We can use the current provider as a default. If the rate limit is reached, use rpc-handler to try the next fastest provider from the list, as sketched below.
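A minimal sketch of that fallback flow, assuming rpc-handler exposes getFastestRpcProvider() as its tests do; callWithFallback and the default URL are illustrative names, not an existing API:

import { JsonRpcProvider } from "@ethersproject/providers";
import { RPCHandler } from "@keyrxng/rpc-handler";

const DEFAULT_RPC_URL = "https://rpc.gnosischain.com"; // assumed default

async function callWithFallback<T>(
  handler: RPCHandler,
  call: (provider: JsonRpcProvider) => Promise<T>
): Promise<T> {
  const defaultProvider = new JsonRpcProvider(DEFAULT_RPC_URL);
  try {
    // Try the current (default) provider first.
    return await call(defaultProvider);
  } catch {
    // On a rate limit (or any other failure), ask rpc-handler for the
    // fastest available provider for this network and retry once.
    const fallbackProvider = await handler.getFastestRpcProvider();
    return await call(fallbackProvider);
  }
}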

I can experiment: first try to reproduce the error on a fork or with a test, then submit a fix today / tomorrow.

@gitcoindev (Contributor)

/start

ubiquibot (bot) commented Apr 15, 2024

Deadline: Tue, Apr 16, 10:34 AM UTC
Registered Wallet: 0x7e92476D69Ff1377a8b45176b1829C4A5566653a
Tips:
  • Use /wallet 0x0000...0000 if you want to update your registered payment wallet address.
  • Be sure to open a draft pull request as soon as possible to communicate updates on your progress.
  • Be sure to provide timely updates to us when requested, or you will be automatically unassigned from the task.

@gitcoindev (Contributor)

@gentlementlegen, just to be sure we are not crossing work on tasks, please let me know if you have already started working on this; I can also pass the task to you.

@gentlementlegen (Member)

@gitcoindev Nope, just did some research but no work done, all yours!

@gitcoindev (Contributor)

Hi @Keyrxng, I have been experimenting with https://github.com/ubiquity/rpc-handler this week and I get somewhat unstable results when I run actions: https://github.com/korrrba/comment-incentives/actions (I hope you have access there; if not, I can grant it). I added some loops during integration to load / stress test, and I usually have to run the action twice for it to go green; I often get Error: could not detect network (event="noNetwork", code=NETWORK_ERROR, version=providers/5.7.2).

I have three questions about rpc-handler, which I think is a great tool:

  1. I used import { RPCHandler } from "@keyrxng/rpc-handler";. I understand that I should instead use the polymorphic version with improved exports (ubiquity/rpc-handler#6) once it is merged and deployed to the npm registry?

  2. Do you already have a working reference integration with any bot plugins somewhere?

  3. If we get rate limited, my idea is to select the next fastest RPC provider. I had a look at tests/benchmark.test.ts and I see you sort by latencies; perhaps we could have a function that returns the list of providers sorted by latency, and we could then just iterate to get the next available fastest RPC (see the sketch after this list). Do you have a similar or other idea on this?
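For illustration, something like this (getProvidersByLatency and callWithNextFastest are hypothetical helpers; this assumes the handler can expose its measured latencies as a URL-to-milliseconds map, as in tests/benchmark.test.ts):

import { JsonRpcProvider } from "@ethersproject/providers";

function getProvidersByLatency(latencies: Record<string, number>): JsonRpcProvider[] {
  return Object.entries(latencies)
    .sort(([, a], [, b]) => a - b) // ascending latency: fastest first
    .map(([url]) => new JsonRpcProvider(url));
}

async function callWithNextFastest<T>(
  latencies: Record<string, number>,
  call: (provider: JsonRpcProvider) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const provider of getProvidersByLatency(latencies)) {
    try {
      return await call(provider); // first success wins
    } catch (err) {
      lastError = err; // rate limited or unreachable, try the next fastest
    }
  }
  throw lastError;
}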

@Keyrxng (Contributor) commented Apr 19, 2024

  1. https://www.npmjs.com/package/@keyrxng/rpc-handler (I've already published the polymorphic package under my name, maybe that will respond better for you), but yes, the updated package is better.

  2. No, I have not done a server-side integration yet; the closest is the test cases. I noticed in the feeble stress tests I did that the first touch was always far slower and more error-prone than subsequent touches. I assume you are being rate limited after a few successful calls to a particular RPC, or is it erroneously returning a provider which cannot be called in the first place?

  3. This is already implemented in the upgrade, yes (I click test multiple times, which is why they keep changing).

bandicam.2024-04-19.09-55-35-550.mp4

@gitcoindev (Contributor)

I gave this some more thought over the weekend and found a better, I think proper, solution for rate limits that can be applied to any RPC without switching the provider.

I wrote a simple load test that almost always causes any RPC provider to hit the rate limit:

import { JsonRpcProvider } from "@ethersproject/providers";
import { useHandler } from "./helpers/rpc-handler";
import { getTokenSymbol } from "./helpers/contracts";

export async function rateLimitTest() {
  // The original snippet used `provider` without defining it; assumed setup:
  const rpcHandler = useHandler(100); // 100 = Gnosis Chain network id
  const provider: JsonRpcProvider = await rpcHandler.getFastestRpcProvider();

  let tokenSymbol;
  // Fire 1000 calls without awaiting them, leaving their promises pending.
  for (let i = 1; i <= 1000; i++) {
    tokenSymbol = getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
    console.log(i, tokenSymbol);
  }
  // Now await three calls; the overloaded provider hits the rate limit.
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  console.log(tokenSymbol);
}

The code triggers 1000 RPC calls / promises to getTokenSymbol, which are left in a pending state, then calls the same RPC method three more times with await, waiting for the result. The provider, overloaded with the previous calls, hits the rate limit.

I evaluated a few possible solutions. When throttling does not help, it is best to simply wait and retry the RPC call after a delay. I selected two TypeScript libraries that can help achieve this: https://github.com/sindresorhus/p-retry and https://github.com/franckLdx/ts-retry, both in active development and with healthy weekly downloads.

The drawback of p-retry is that it ships as an ESM-only module, and I ran into CommonJS / ESM coexistence hell with Jest and Babel (jestjs/jest#13739): the functionality works, but the tests failed. ts-retry, on the other hand, supports both ESM and CommonJS and contains a few additional useful functions and decorators.

For example, it is possible to retry an async function until it returns a 'defined' value, e.g.:

import { retryAsyncUntilDefined } from "ts-retry";
import { JsonRpcProvider } from "@ethersproject/providers";
import { useHandler } from "../../helpers/rpc-handler";

const provider = await retryAsyncUntilDefined<JsonRpcProvider>(
  async () => {
    const rpcHandler = useHandler(config.payments.evmNetworkId);
    return await rpcHandler.getFastestRpcProvider();
  },
  { delay: 1000, maxTry: 5 }
);

The code above will retry up to five times, waiting 1 second between retries, to select the fastest RPC provider whenever the result comes back undefined for any reason, e.g. the network being lost.

The solution for my initial load test using ts-retry is:

import { retryAsync } from "ts-retry";

let tokenSymbol;
for (let i = 1; i <= 1000; i++) {
  tokenSymbol = getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  console.log(i, tokenSymbol);
}
tokenSymbol = await retryAsync<string>(
  async () => await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider),
  { delay: 1000, maxTry: 5 }
);
tokenSymbol = await retryAsync<string>(
  async () => await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider),
  { delay: 1000, maxTry: 5 }
);
tokenSymbol = await retryAsync<string>(
  async () => await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider),
  { delay: 1000, maxTry: 5 }
);
console.log(tokenSymbol);

which always passes the test:

  console.log
    999 Promise { <pending> }

      at log (src/index.test.ts:72:13)

  console.log
    1000 Promise { <pending> }

      at log (src/index.test.ts:72:13)

  console.log
    WXDAI

      at log (src/index.test.ts:88:11)

  console.log
    done

      at log (src/index.test.ts:95:15)

 PASS  src/index.test.ts (28.501 s)
  ✓ runs (26768 ms)

Test Suites: 1 passed, 1 total

Therefore I will open a pull request to this repo adding ts-retry retryAsync wrappers around the RPC provider setup and RPC calls; this should fix the rate limit hits. A similar solution can later be applied to any rate-limit scenario in other repositories / plugins. A generic wrapper could look like the sketch below.
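A minimal sketch of such a wrapper (withRpcRetry is an illustrative name; it only assumes the ts-retry retryAsync API used above):

import { retryAsync } from "ts-retry";
import { JsonRpcProvider } from "@ethersproject/providers";

async function withRpcRetry<T>(
  provider: JsonRpcProvider,
  call: (provider: JsonRpcProvider) => Promise<T>
): Promise<T> {
  // Retry up to 5 times with a 1 second delay, so transient rate limit
  // errors can clear before the call is abandoned.
  return retryAsync<T>(async () => await call(provider), { delay: 1000, maxTry: 5 });
}

// Usage idea (getTokenSymbol as in the load test above):
// const symbol = await withRpcRetry(provider, (p) => getTokenSymbol(tokenAddress, p));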

@gitcoindev (Contributor)

/start

ubiquibot (bot) commented Apr 21, 2024

! Skipping '/start' because the issue is already assigned.

@gentlementlegen (Member)

@gitcoindev thanks for your research. While this works, how long you have to wait until the next call succeeds really depends on the endpoint: I had cases where it was seconds, and others where it was minutes. The benefit of switching RPCs is having a working one available right away. Would this make the implementation much more complex?

@0x4007 (Member, Author) commented Apr 22, 2024

Good research, but I'm always skeptical of "time based" solutions compared to "event based" ones.

What @gentlementlegen mentions is an example of why this solution might not be the best approach.

@gitcoindev (Contributor)

@0x4007 @gentlementlegen Sure, I will rework this to switch RPCs. Perhaps combining the two approaches should make it even more robust.

@gitcoindev (Contributor)

I tested multiple times and the following always seems to work: for any call, use the default provider; in case of an error, immediately switch to the fastest available provider supplied by the RPC handler using onError: async () => provider = await rpcHandler.getFastestRpcProvider().

import { ethers } from "ethers";
import { JsonRpcProvider } from "@ethersproject/providers";
import { retryAsync, retryAsyncUntilDefined } from "ts-retry";
import { RPCHandler, useHandler } from "./helpers/rpc-handler";
import { getTokenSymbol } from "./helpers/contracts";

export async function rateLimitTest() {
  const rpcHandler: RPCHandler = useHandler(100); // 100 = Gnosis Chain

  // Default provider, retried in case construction fails for any reason.
  let provider = await retryAsyncUntilDefined<JsonRpcProvider>(
    async () => new ethers.providers.JsonRpcProvider("https://rpc.gnosischain.com"),
    { maxTry: 5 }
  );

  let tokenSymbol;
  // Flood the default provider with 1000 pending calls.
  for (let i = 1; i <= 1000; i++) {
    tokenSymbol = getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
    console.log(i, tokenSymbol);
  }

  // On error (e.g. a rate limit), switch to the fastest available provider.
  tokenSymbol = await retryAsync<string>(
    async () => await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider),
    {
      maxTry: 5,
      onError: async () => (provider = await rpcHandler.getFastestRpcProvider()),
    }
  );

  // Subsequent calls go through the switched provider.
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  console.log(tokenSymbol);
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  console.log(tokenSymbol);
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  console.log(tokenSymbol);
}

I will update the pull request now. In any case, we can always think of more sophisticated scenarios, but this one seems quite robust as long as the RPC handler works correctly.

@gitcoindev (Contributor)

/start

ubiquibot (bot) commented Apr 22, 2024

Deadline: Tue, Apr 23, 4:13 PM UTC
Registered Wallet: 0x7e92476D69Ff1377a8b45176b1829C4A5566653a
Tips:
  • Use /wallet 0x0000...0000 if you want to update your registered payment wallet address.
  • Be sure to open a draft pull request as soon as possible to communicate updates on your progress.
  • Be sure to provide timely updates to us when requested, or you will be automatically unassigned from the task.

@gitcoindev (Contributor)

@gentlementlegen I updated https://github.com/ubiquibot/comment-incentives/pull/35/files and would be grateful if you re-reviewed; you can also test from your side.

ubiquibot (bot) commented Apr 23, 2024

+ Evaluating results. Please wait...

ubiquibot (bot) commented Apr 23, 2024

[ 16.8 WXDAI ]

@0x4007

Contributions Overview
| View  | Contribution  | Count | Reward |
| ----- | ------------- | ----- | ------ |
| Issue | Specification | 1     | 7.2    |
| Issue | Comment       | 2     | 9.6    |

Conversation Incentives
| Comment | Formatting | Relevance | Reward |
| ------- | ---------- | --------- | ------ |
| > + Evaluating results. Please wait... We g... | 7.2 (code: count 1, score 1, words 0) | 1 | 7.2 |
| Yeah research and implementation should be handled wherever poss... | 3.4 | 0.59 | 3.4 |
| Good research but I'm always skeptical of "time based" solutions... | 6.2 | 0.78 | 6.2 |

[ 9.7 WXDAI ]

@gentlementlegen

Contributions Overview
| View  | Contribution | Count | Reward |
| ----- | ------------ | ----- | ------ |
| Issue | Comment      | 3     | 9.7    |

Conversation Incentives
| Comment | Formatting | Relevance | Reward |
| ------- | ---------- | --------- | ------ |
| Since this repo should be soon deprecated, do you want a fix her... | 2.6 | 0.56 | 2.6 |
| @gitcoindev nop just did some research but no work done, all you... | 1.2 | 0.8 | 1.2 |
| @gitcoindev thanks for your research. While this works, it reall... | 5.9 | 0.67 | 5.9 |

[ 540.1 WXDAI ]

@gitcoindev

Contributions Overview
| View  | Contribution | Count | Reward |
| ----- | ------------ | ----- | ------ |
| Issue | Task         | 1     | 400    |
| Issue | Comment      | 7     | 0      |
| Issue | Comment      | 7     | 140.1  |

Conversation Incentives
| Comment | Formatting | Relevance | Reward |
| ------- | ---------- | --------- | ------ |
| I did a research on this today. Starting from where and why it h... | - (li: count 3, score 0, words 43; code: count 3, score 0, words 12) | 0.99 | - |
| @gentlementlegen , just to be sure we are not crossing work on t... | - | 0.77 | - |
| hi @Keyrxng I have been experimenting with https://github.com/u... | - (li: count 3, score 0, words 0; code: count 2, score 0, words 24) | 0.84 | - |
| I gave some more thought over this over the weekend, and found a... | - (code: count 4, score 0, words 0) | 0.71 | - |
| @0x4007 @gentlementlegen sure I will rework this to switch RPC.... | - | 0.86 | - |
| I tested multiple times and the following always seems to work: ... | - (code: count 2, score 0, words 7) | 0.93 | - |
| @gentlementlegen I updated https://github.com/ubiquibot/comment-... | - | 0.95 | - |
| I did a research on this today. Starting from where and why it h... | 31.5 (li: count 3, score 3, words 43; code: count 3, score 3, words 12) | 0.99 | 31.5 |
| @gentlementlegen , just to be sure we are not crossing work on t... | 3.2 | 0.77 | 3.2 |
| hi @Keyrxng I have been experimenting with https://github.com/u... | 25.6 (li: count 3, score 3, words 0; code: count 2, score 2, words 24) | 0.84 | 25.6 |
| I gave some more thought over this over the weekend, and found a... | 57.5 (code: count 4, score 4, words 0) | 0.71 | 57.5 |
| @0x4007 @gentlementlegen sure I will rework this to switch RPC.... | 2.2 | 0.86 | 2.2 |
| I tested multiple times and the following always seems to work: ... | 17.5 (code: count 2, score 2, words 7) | 0.93 | 17.5 |
| @gentlementlegen I updated https://github.com/ubiquibot/comment-... | 2.6 | 0.95 | 2.6 |

[ 13.95 WXDAI ]

@Keyrxng

Contributions Overview
| View  | Contribution | Count | Reward |
| ----- | ------------ | ----- | ------ |
| Issue | Comment      | 1     | 13.95  |

Conversation Incentives
| Comment | Formatting | Relevance | Reward |
| ------- | ---------- | --------- | ------ |
| 1. https://www.npmjs.com/package/@keyrxng/rpc-handler (I've publ... | 13.95 (li: count 3, score 0.75, words 0) | 0.76 | 13.95 |

Keyrxng pushed a commit to ubq-testing/permit-generation that referenced this issue Oct 30, 2024