I think I came up with a better approach to handle the Shopify API rate limit than the current one, which tracks the number of available calls per API token.
I'll describe the approach with everything stored in memory, but it also works with something like Redis, which makes it scalable.
It works exactly the same way for GraphQL and REST; I'll use REST as the example.
The idea is to have a queue per API token that stores the calls that hit the rate limit. The data structure (in memory) would be a dictionary keyed by an identifier of the API token (store name for an offline token, store name + staff id for an online token), with a Python queue as the value.
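As a rough sketch of that structure (all names here are illustrative, not from an existing codebase; in production the dictionary would live in Redis):

```python
import queue
from typing import Dict, Optional

# Hypothetical in-memory registry: one queue per API token identifier,
# holding the calls that hit the rate limit for that token.
rate_limited_calls: Dict[str, queue.Queue] = {}

def token_key(store_name: str, staff_id: Optional[str] = None) -> str:
    # Key scheme described above: store name alone for an offline token,
    # store name + staff id for an online token.
    return f"{store_name}:{staff_id}" if staff_id else store_name
```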
1. The `execute_rest` method is called on an `OfflineToken`.
2. Start a loop that checks the dictionary for this token's queue; if a queue exists, check whether it is empty, and if not, check its first element.
3. If there is something in the queue, enqueue a hash of the call parameters and WAIT 1 second.
4. If the queue doesn't exist, is empty, or its first element matches this call's hash, exit the loop.
5. Retrieve the token from the DB.
6. Execute the REST API call and RETURN.
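The steps above could be sketched roughly as follows, using an in-memory dict of deques as a stand-in for Redis. `pending`, `execute_rest`, and `do_call` are all illustrative names, not an existing API, and the DB lookup is omitted:

```python
import time
from collections import deque

# token id -> deque of call hashes waiting for the rate limit to clear.
pending: dict = {}

def execute_rest(token_id, call_hash, do_call):
    """Sketch of the queueing loop; blocks until it is this call's turn."""
    enqueued = False
    while True:
        q = pending.get(token_id)
        # Exit the loop: no queue / empty queue, or our hash is at the head.
        if not q or q[0] == call_hash:
            if q and q[0] == call_hash:
                q.popleft()  # take our slot off the queue
            break
        if not enqueued:
            q.append(call_hash)  # register this call once, then wait
            enqueued = True
        time.sleep(1)
    # ...retrieve the token from the DB here, then execute the call...
    return do_call()
```

When no call has hit the rate limit, `pending` has no entry for the token, so the loop exits immediately and the call goes straight through.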
The clear advantage over what we are doing now is that when the rate limit is not hit, the call just goes through and nothing is stored in memory.
Comments and feedback please! 🙏