[Proposal] On HashRedisStore, when loading coverage, get the latest report from a cache and defer current report generation to a background thread #499
Running Coverband with HashRedisStore on a large project (~4600 files) has proven difficult: every time we load the coverage page, Coverband issues a large number of Redis HGETALL commands (2 per file) and blocks the request until they all return.
This PR aims to mitigate the load HashRedisStore puts on Redis when generating the coverage report by introducing a caching layer, so that the HGETALL commands used to build the report run inside background threads, in batches of 250 (per file type, concurrently). Subsequent requests then receive the latest cached result while a fresh cache is re-populated in the background.
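The serve-stale-then-refresh pattern described above can be sketched as follows. This is a hedged illustration, not the actual Coverband implementation: the `CachedReportStore` class, `BATCH_SIZE` constant, and the loader block are all hypothetical names standing in for the real per-file HGETALL calls.

```ruby
# Illustrative sketch only (names are not Coverband's actual API):
# serve the most recent cached report immediately and rebuild it in a
# background thread, loading per-file data in concurrent batches.
class CachedReportStore
  BATCH_SIZE = 250

  # `loader` receives one file key and returns its coverage data,
  # standing in for the pair of HGETALL calls made per file.
  def initialize(file_keys, &loader)
    @file_keys  = file_keys
    @loader     = loader
    @mutex      = Mutex.new
    @cached     = nil
    @refreshing = false
  end

  # Returns the latest cached report (nil until the first rebuild
  # finishes) and kicks off a refresh unless one is already running.
  def coverage
    refresh_in_background
    @mutex.synchronize { @cached }
  end

  private

  def refresh_in_background
    @mutex.synchronize do
      return if @refreshing
      @refreshing = true
    end
    Thread.new do
      report = build_report
      @mutex.synchronize do
        @cached     = report
        @refreshing = false
      end
    end
  end

  # Loads files in slices of BATCH_SIZE, each slice concurrently, so at
  # most BATCH_SIZE loads hit the backing store at once.
  def build_report
    report = {}
    @file_keys.each_slice(BATCH_SIZE) do |batch|
      batch.map { |key| Thread.new { [key, @loader.call(key)] } }
           .each { |t| key, data = t.value; report[key] = data }
    end
    report
  end
end
```

With this shape, the very first request may see an empty cache while the initial rebuild runs; every later request returns the last completed report without waiting on the expensive loads.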
This optimization improved load times of the report page from 30-40 seconds to 15-20 seconds (on a 2 GB Redis server with about 40% memory free). Tested with: