Right now client requests are serialized, which leads to long wait times when fetching results for projects with many dependencies. A parallel implementation of querying the API could make things significantly faster and perhaps remove the need for a cache.
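A minimal sketch of what the parallel version could look like, using a thread pool to issue queries concurrently (`fetch_package_info` is a hypothetical stand-in for the real API call, simulated here with a sleep):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_package_info(package: str) -> dict:
    """Stand-in for the real API call (hypothetical name).
    The sleep simulates network latency."""
    time.sleep(0.1)
    return {"name": package, "status": "ok"}

def fetch_all(packages, max_workers=8):
    """Query all packages concurrently instead of one at a time.
    With 8 workers, 8 queries take roughly one round-trip
    instead of eight."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_package_info, packages))
```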
Actually, the cache is required because the new API is rate limited. By caching, we ensure that each package is queried at most once per day, which makes any particular user less likely to hit the rate limit.
The rate limiting uses a "leaky bucket" algorithm. Depending on whether you are a registered user or not, the rate limit has a different starting number of requests (16 for non-registered, 64 for registered). Each request can be for 128 total packages. Requests replenish at a rate of one per minute.
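The bucket behavior described above can be sketched roughly as follows. The numbers come straight from this thread (16 requests for non-registered users, 64 for registered, replenishing one per minute); the class and method names are just illustrative:

```python
import time

class LeakyBucket:
    """Sketch of the rate limit described above: the bucket starts
    full (16 or 64 requests) and refills at one request per minute,
    capped at its capacity."""

    def __init__(self, capacity: int, refill_seconds: float = 60.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_seconds = refill_seconds
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Replenish one request per refill interval, never exceeding capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) / self.refill_seconds,
        )
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Since each request can cover up to 128 packages, a non-registered user's initial burst of 16 requests already covers 2048 packages before any replenishment is needed.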
So regardless of whether we add parallelization, we definitely don't want to get rid of the caching.