
[Proposal] On HashRedisStore, when loading coverage, get the latest report from a cache and defer current report generation to a background thread #499

Merged 1 commit into danmayer:main on Jan 24, 2024

Conversation

@Drowze (Contributor) commented Jan 23, 2024

Running Coverband with HashRedisStore on a large project (~4,600 files) has proven difficult: every time we load the coverage page, Coverband issues a large number of Redis HGETALL commands (2 per file) and blocks the request until they all return.
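For scale, a rough tally (not Coverband code) of the blocking round trips implied by those numbers:

```ruby
# Hypothetical back-of-the-envelope tally for a ~4,600-file project,
# assuming 2 HGETALL calls per file (presumably one per coverage type).
files = 4_600
hgetalls_per_file = 2
total_commands = files * hgetalls_per_file
puts total_commands # => 9200 Redis commands before the page can render
```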

This PR aims to mitigate the hit HashRedisStore has on Redis when generating the coverage report by introducing a caching layer: the HGETALL commands used to build the report run inside background threads, in batches of 250 (per file type, concurrently). Subsequent requests then receive the latest cached result while a new cache is repopulated in the background.
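The serve-stale-while-revalidate idea described above can be sketched in a few lines. This is a minimal illustration, not Coverband's actual classes: the first call blocks to build the cache, and later calls return the cached report immediately while a background thread refreshes it.

```ruby
# Minimal sketch (hypothetical class, not Coverband's implementation):
# serve the last cached report and rebuild it on a background thread.
class BackgroundReportCache
  def initialize(&fetcher)
    @fetcher = fetcher # the expensive call, e.g. the batched HGETALLs
    @cache = nil
    @refreshing = false
    @mutex = Mutex.new
  end

  def report
    if @cache.nil?
      @cache = @fetcher.call # worst case: no cache yet, block the request
    else
      refresh_in_background  # serve stale data, rebuild concurrently
    end
    @cache
  end

  private

  def refresh_in_background
    return if @refreshing
    @refreshing = true
    Thread.new do
      fresh = @fetcher.call
      @mutex.synchronize do
        @cache = fresh
        @refreshing = false
      end
    end
  end
end
```

The trade-off is the one the benchmark below shows: the first (cold) request still pays the full cost, but every later request only pays for rendering while the Redis work happens off the request thread.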

This optimization improved report page load times from 30-40 seconds to 15-20 seconds (on a 2 GB Redis server with about 40% of memory free). Tested with:

Benchmark.measure do
  Coverband::Reporters::Web.new.tap do |cov|
    cov.instance_variable_set(:@request, Rack::Request.new(Rack::MockRequest.env_for("/")))
  end.index
  nil
end

# worst case i.e. without any cache
# => #<Benchmark::Tms:0x00007f082f4ddee0 @cstime=0.0, @cutime=0.0, @label="", @real=36.507868222999605, @stime=2.697597, @total=36.615393, @utime=33.917795999999996>

# with a cache
# => #<Benchmark::Tms:0x00007f081aaf1558 @cstime=0.0, @cutime=0.0, @label="", @real=16.935891766999703, @stime=0.9192540000000005, @total=16.973998, @utime=16.054744>

@Drowze force-pushed the hash-redis-store-get-coverage-cache branch 3 times, most recently from a7d347c to 4d2407f on January 23, 2024 at 13:25
@Drowze force-pushed the hash-redis-store-get-coverage-cache branch from 4d2407f to 3f79c33 on January 23, 2024 at 14:02
Opt-in by configuration; this should speed up report loading significantly
in projects with thousands of files
@Drowze force-pushed the hash-redis-store-get-coverage-cache branch from 3f79c33 to 07d8f7f on January 23, 2024 at 14:06
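Per the commit message, the cache is opt-in. A configuration sketch might look like the following; the `get_coverage_cache:` keyword is only inferred from the branch name, so treat it as an assumption and check the Coverband README for the exact option:

```ruby
# config/coverband.rb -- hypothetical opt-in sketch, not a verified API
require "coverband"

Coverband.configure do |config|
  config.store = Coverband::Adapters::HashRedisStore.new(
    Redis.new(url: ENV["REDIS_URL"]),
    get_coverage_cache: true # assumed option name; the feature is off by default
  )
end
```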
@danmayer (Owner) commented:
Ouch, this one definitely adds a lot of complexity, but perhaps the background thread and iterative caching is simpler than the approach I was considering with paging in the data. Since this is opt-in and off by default, I feel like I can pull it in and we can get some feedback about how it performs for the various folks who have been hitting performance issues.

@danmayer danmayer merged commit a2b8c85 into danmayer:main Jan 24, 2024
64 checks passed
@danmayer (Owner) commented:
The changes are in the 6.0.2 release; thanks so much for your contributions!

@Drowze Drowze deleted the hash-redis-store-get-coverage-cache branch January 24, 2024 13:02