Rate Limiting #12
Since this repo should soon be deprecated, do you want a fix here? I guess this problem is relevant to the new system as well.
Yeah, research and implementation should be handled here wherever possible; they can easily be ported later if needed.
I did some research on this today, starting from where and why it happens to how to solve it:

The limit is set to 500 calls by default on the client side: https://github.com/NethermindEth/nethermind/blob/489f3277eddfba5b514d2c7779094b6981ec629e/src/Nethermind/Nethermind.Init/Steps/RegisterRpcModules.cs#L113. One way to circumvent this is to use the ethers module's throttling feature, but as discovered in the ethers implementation, it only kicks in under certain conditions.
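For context, a minimal sketch of how that throttling is configured in an ethers v5 setup; the endpoint URL is a placeholder, and the option values here are illustrative, not the thread's actual settings:

```typescript
import { ethers } from "ethers";

// Sketch only: ethers v5's ConnectionInfo accepts throttle settings that
// control its built-in retry/backoff when the endpoint rejects a request.
const provider = new ethers.providers.JsonRpcProvider({
  url: "https://rpc.example.org", // placeholder endpoint
  throttleLimit: 10,              // how many times to retry a throttled call
  throttleSlotInterval: 100,      // backoff slot duration in milliseconds
});
```

As noted above, this built-in throttling only helps when ethers recognizes the throttle response, which is why the thread moves on to retry and provider-switching strategies.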
We can use the current provider as a default. If the rate limit is reached, use rpc-handler to try the next fastest provider from the list. I can experiment: first try to reproduce the error on a fork or in a test, then submit a fix today / tomorrow.
/start
@gentlementlegen, just to be sure we are not crossing work on tasks, please let me know if you have already started working on this; I can also pass the task to you.
@gitcoindev nope, just did some research but no work done, all yours!
Hi @Keyrxng, I have been experimenting with https://github.com/ubiquity/rpc-handler this week and I get somewhat unstable results when I run actions at https://github.com/korrrba/comment-incentives/actions (I hope you have access there; if not, I can grant it). I added some loops during integration to load / stress test, and I usually have to run the action twice for it to become green; I often get failures. I have three questions about rpc-handler, which I think is a great tool:

(video attachment: bandicam.2024-04-19.09-55-35-550.mp4)
I gave this some more thought over the weekend and found a better, and I think proper, solution for rate limits that can be applied to any RPC without switching the provider. I designed a simple load-testing snippet that almost always causes any RPC provider to hit the rate limit:
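The snippet itself was lost in this export; below is a hedged reconstruction of the described load test, with `getTokenSymbol` stubbed so it runs without a live endpoint (the 500-call limit mirrors the client-side default cited earlier, and the stub's behavior is an assumption):

```typescript
// Stub for the real RPC call (a live version would query a token
// contract's symbol() through a provider). The counter simulates a
// provider that rejects once more than RATE_LIMIT calls are in flight.
const RATE_LIMIT = 500;
let inFlight = 0;

async function getTokenSymbol(): Promise<string> {
  inFlight++;
  if (inFlight > RATE_LIMIT) {
    throw new Error("429: rate limit exceeded");
  }
  return "WXDAI";
}

export async function loadTest(): Promise<string> {
  // Fire 1000 calls and leave the promises pending (not awaited).
  const pending = Array.from({ length: 1000 }, () =>
    getTokenSymbol().catch(() => null)
  );
  // Now make three awaited calls; the overloaded provider rejects them.
  let last = "";
  for (let i = 0; i < 3; i++) {
    last = await getTokenSymbol(); // throws once the limit is hit
  }
  await Promise.all(pending);
  return last;
}
```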
The code triggers 1000 RPC calls (promises) to getTokenSymbol, which are left pending. It then makes the same RPC call three times with await, waiting for each result. The provider, overloaded by the previous calls, hits the rate limit. I evaluated a few possible solutions. When throttling does not help, it is best to simply wait and retry the RPC call after a delay. I selected two candidate TypeScript libraries that can help achieve this: https://github.com/sindresorhus/p-retry and https://github.com/franckLdx/ts-retry, both in active development and with healthy weekly downloads. The drawback of p-retry is that it is ESM-only, and I ran into CommonJS / ESM coexistence problems with Jest and Babel (jestjs/jest#13739): the functionality works, but the tests failed. ts-retry, on the other hand, supports both ESM and CommonJS and contains a few additional useful functions and decorators. For example, it is possible to retry an async function until it returns a 'defined' value, e.g.:
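The example was also dropped from this export; below is a hedged reconstruction of the idea, hand-rolled so it runs standalone (`getFastestRpcProvider`, its failure pattern, and its return value are hypothetical; the real library call has the shape `retryAsyncUntilDefined(fn, { delay: 1000, maxTry: 5 })`):

```typescript
// Minimal stand-in for ts-retry's retryAsyncUntilDefined: retry fn until
// it resolves to a defined value, waiting `delay` ms between attempts.
async function retryUntilDefined<T>(
  fn: () => Promise<T | undefined>,
  options: { delay: number; maxTry: number }
): Promise<T> {
  for (let attempt = 1; attempt <= options.maxTry; attempt++) {
    const result = await fn();
    if (result !== undefined) return result;
    if (attempt < options.maxTry) {
      await new Promise((resolve) => setTimeout(resolve, options.delay));
    }
  }
  throw new Error("no defined result after maxTry attempts");
}

// Hypothetical provider lookup that returns undefined twice
// (e.g. network lost), then succeeds on the third attempt.
let attempts = 0;
async function getFastestRpcProvider(): Promise<string | undefined> {
  attempts++;
  return attempts < 3 ? undefined : "https://rpc.gnosischain.com";
}

// Retries up to five times, waiting 1 second between attempts.
export const fastestRpc = retryUntilDefined(getFastestRpcProvider, {
  delay: 1000,
  maxTry: 5,
});
```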
The code above will retry five times, waiting 1 second between each retry, to select the fastest RPC provider if the result came back undefined for any reason, e.g. a lost network connection. The solution for my initial load test using ts-retry is:
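That fix is likewise missing from this export; below is a hedged sketch of the approach, with a hand-rolled equivalent of ts-retry's `retryAsync(fn, { delay, maxTry })` and a simulated provider so the snippet is self-contained (the option values are illustrative):

```typescript
// Simulated provider: the first few calls hit the rate limit, then the
// load drops and calls succeed (mimicking the load test described above).
let throttledCallsLeft = 3;
async function getTokenSymbol(): Promise<string> {
  if (throttledCallsLeft > 0) {
    throttledCallsLeft--;
    throw new Error("429: rate limit exceeded");
  }
  return "WXDAI";
}

// Hand-rolled equivalent of ts-retry's retryAsync: retry on rejection,
// waiting `delay` ms between attempts, up to `maxTry` attempts.
async function retryAsync<T>(
  fn: () => Promise<T>,
  options: { delay: number; maxTry: number }
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= options.maxTry; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < options.maxTry) {
        await new Promise((resolve) => setTimeout(resolve, options.delay));
      }
    }
  }
  throw lastError;
}

// Wrapping the RPC call: it now survives the rate-limited window.
export const symbol = retryAsync(getTokenSymbol, { delay: 100, maxTry: 5 });
```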
which always passes the test:
Therefore I will open a pull request to this repo adding ts-retry retryAsync wrappers to the RPC provider setup and RPC calls; this should fix the rate limit hits. A similar solution can later be applied to any rate-limit scenario in other repositories / plugins.
/start
! Skipping '/start' because the issue is already assigned.
@gitcoindev thanks for your research. While this works, the time you have to wait until the next call succeeds really depends on the endpoint. I had cases where it was seconds, and others where it was minutes. The benefit of switching RPCs is having one available right away. Would this make the implementation much more complex?
Good research, but I'm always skeptical of "time based" solutions compared to "event based" ones. What @gentlementlegen mentions is an example of why this solution might not be the best approach.
@0x4007 @gentlementlegen sure, I will rework this to switch RPCs. Perhaps combining the two approaches will make it even more robust.
I tested multiple times and the following always seems to work: for any call, use the default provider; in case of an error, immediately switch to the fastest available provider from rpc-handler. I will update the pull request now. In any case, we can always think of more sophisticated scenarios, but this one seems quite robust as long as rpc-handler works correctly.
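The fallback pattern described above (default provider first, switch on error) can be sketched as follows; both provider objects are stubs, and the fastest-provider lookup is a hypothetical stand-in for rpc-handler's selection logic, not its confirmed API:

```typescript
type Provider = { call: (method: string) => Promise<string> };

// Stub default provider that is currently rate limited.
const defaultProvider: Provider = {
  call: async () => {
    throw new Error("429: rate limit exceeded");
  },
};

// Hypothetical stand-in for rpc-handler's fastest-provider selection;
// a real version would ask rpc-handler for its fastest endpoint.
async function getFastestProvider(): Promise<Provider> {
  return { call: async () => "0x5752584441490000" };
}

// Event-based fallback: no fixed waiting, switch as soon as an error occurs.
export async function callWithFallback(method: string): Promise<string> {
  try {
    return await defaultProvider.call(method);
  } catch {
    const fallback = await getFastestProvider();
    return fallback.call(method);
  }
}

export const result = callWithFallback("eth_call");
```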
/start
@gentlementlegen I updated https://github.com/ubiquibot/comment-incentives/pull/35/files; I would be grateful if you re-reviewed. You can also test from your side.
We got rate limited https://github.com/ubiquibot/comment-incentives/actions/runs/8617933701/job/23619113189
Seems like this can be a problem as we pick up on activity in our network.
Originally posted by @0x4007 in #5 (comment)