Return zstd Decoder to sync.pool when finished #770

Open
tobbe76 wants to merge 1 commit into master

Conversation

tobbe76 (Contributor) commented Aug 23, 2024

The zstd Decoder was not returned to the sync.Pool, so a new instance was allocated each time, which reduced performance. With this fix we can handle higher loads on the bazel-remote server.

The deferred close for dec was broken due to a shadowed variable, so it was removed. A deferred close was added in the go func that calls cache.Put instead.
With this change we can handle higher load on the server without crashing due to GOMAXPROCS.
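
A minimal sketch of the pooling pattern, using the klauspost/compress zstd API (the package and the decoderPool/decompress names are illustrative, not necessarily bazel-remote's actual identifiers):

package cache

import (
	"io"
	"sync"

	"github.com/klauspost/compress/zstd"
)

var decoderPool = sync.Pool{
	New: func() any {
		// NewReader(nil) builds a decoder that must be Reset before use;
		// the error is only non-nil for invalid options.
		dec, _ := zstd.NewReader(nil)
		return dec
	},
}

func decompress(src io.Reader, dst io.Writer) (err error) {
	dec := decoderPool.Get().(*zstd.Decoder)
	defer func() {
		// Reset(nil) drops the reference to src before the decoder goes
		// back into the pool instead of being garbage collected.
		_ = dec.Reset(nil)
		decoderPool.Put(dec)
	}()

	// Assign with err = (not err :=) so the named return value is not
	// shadowed, which is what broke the original deferred close.
	if err = dec.Reset(src); err != nil {
		return err
	}
	_, err = io.Copy(dst, dec)
	return err
}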
mostynb added a commit to mostynb/bazel-remote that referenced this pull request Aug 23, 2024
…estream Writes

This is an alternative fix to buchgr#770.

Thanks to Thorbjorn Larsson @tobbe76 for reporting this issue.
mostynb (Collaborator) commented Aug 23, 2024

Hi, thanks for the fix! Out of curiosity, how did you find this? I wonder if there's a better linter I could be using here.

I propose that we fix this in a slightly different way (see #771), which covers more cases than this PR. In particular, if the err = dec.Reset(pr) line returns a non-nil error in this PR, then the decoder won't be returned to the pool.
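
Concretely, reusing the decoderPool from the sketch above, the shape of the leak looks like this (leakyRead and pr are placeholder names):

// leakyRead skips the Put whenever Reset or the copy fails, so those
// decoders are never returned to the pool and must be reallocated.
func leakyRead(pr io.Reader, dst io.Writer) (err error) {
	dec := decoderPool.Get().(*zstd.Decoder)
	if err = dec.Reset(pr); err != nil {
		return err // early return: dec leaks
	}
	if _, err = io.Copy(dst, dec); err != nil {
		return err // also leaks dec
	}
	decoderPool.Put(dec) // only reached on the happy path
	return nil
}

Deferring the Put immediately after the Get, as in the earlier sketch, covers every return path.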

tobbe76 (Contributor, Author) commented Aug 24, 2024

Thanks @mostynb. I'm a Go beginner; it took a while to understand sync.Pool.

I used http://remote/debug/pprof/heap on the live server and found a huge total count for this object.
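
(For reference, that profile can be inspected with the standard pprof tooling, assuming the server exposes net/http/pprof at that address:

go tool pprof -sample_index=alloc_objects http://remote/debug/pprof/heap
(pprof) top

Sorting by alloc_objects surfaces high-churn allocations like these decoders.)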


I can also see millions of allocations here:

uncompressedChunk := make([]byte, chunkSize)

I have also added a sync.Pool here and tested it locally with good results, but I have not yet tested it on the production server. What do you think, is it a good idea?
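
Such a pool could look roughly like this (chunkPool is an illustrative name; a pointer to the slice is stored so the Put itself doesn't allocate):

// assumes "sync" is imported and chunkSize is the existing chunk size
var chunkPool = sync.Pool{
	New: func() any {
		b := make([]byte, chunkSize)
		return &b
	},
}

// In the function that reads one chunk, instead of a fresh allocation:
bufp := chunkPool.Get().(*[]byte)
uncompressedChunk := *bufp
// ... fill and consume uncompressedChunk ...
chunkPool.Put(bufp)

This is only safe if nothing retains uncompressedChunk after it goes back into the pool, e.g. the data must be copied or fully consumed first.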


Why do an fsync here? It's not good for SSD performance.

mostynb added a commit that referenced this pull request Aug 24, 2024
…estream Writes

This is an alternative fix to #770.

Thanks to Thorbjorn Larsson @tobbe76 for reporting this issue.
mostynb (Collaborator) commented Aug 24, 2024

> Thanks @mostynb. I'm a Go beginner; it took a while to understand sync.Pool.
>
> I used http://remote/debug/pprof/heap on the live server and found a huge total count for this object. I can also see millions of allocations here:
>
> uncompressedChunk := make([]byte, chunkSize)
>
> I have also added a sync.Pool here and tested it locally with good results, but I have not yet tested it on the production server. What do you think, is it a good idea?

Sounds good. Do you want to open a new PR for that?

> Why do an fsync here? It's not good for SSD performance.

That is done to reduce the chance of cache corruption. I think this is most important for action cache entries, which can't be verified by their hash, and for large/compressed CAS blobs, which would be costly to verify.
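
The pattern being protected is roughly write-to-temp, fsync, then rename, so a crash cannot leave a torn blob at the final path (sketched below with illustrative names, not bazel-remote's exact code):

// assumes "os" is imported
func writeBlob(dir, finalPath string, data []byte) error {
	f, err := os.CreateTemp(dir, "upload-*")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name()) // no-op once the rename has succeeded

	if _, err := f.Write(data); err != nil {
		f.Close()
		return err
	}
	// Sync flushes the data to stable storage. Skipping it is faster,
	// especially on SSDs, but a crash could then leave an entry whose
	// contents never reached the disk.
	if err := f.Sync(); err != nil {
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	return os.Rename(f.Name(), finalPath)
}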
