feat(pprof): Supports both heap profiling and heap sampling #1684
In XiaoMi/rdsn#433, we updated the way to get a heap profile by using `GetHeapProfile()` instead of `GetHeapSample()`.
Heap profiling shows which pieces of code allocated (and possibly freed) how much memory while a request was being processed on the server. However, for a server whose memory consumption is already heavy but growing very slowly, it is hard to tell which pieces of code allocated most of the memory. This patch adds heap sampling back and keeps heap profiling as well.
Both ways use the `pprof/heap` method; the difference is whether the `seconds` parameter appears. When the `seconds` parameter appears, `GetHeapProfile()` is used; otherwise, `GetHeapSample()` is used. Remember to set the environment variable `TCMALLOC_SAMPLE_PARAMETER` when using heap sampling.
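For reference, here is a minimal sketch of how a `pprof/heap` handler could dispatch between the two modes with the gperftools APIs (`HeapProfilerStart()`, `GetHeapProfile()`, `HeapProfilerStop()`, and `MallocExtension::GetHeapSample()`). The function name, the dump prefix, and the blocking sleep are illustrative assumptions, not the actual Pegasus handler code:

```cpp
#include <chrono>
#include <cstdlib>
#include <string>
#include <thread>

#include <gperftools/heap-profiler.h>
#include <gperftools/malloc_extension.h>

// Illustrative dispatch for `pprof/heap`: if `seconds` > 0, run heap
// profiling for that duration; otherwise return a heap sample.
// (Sketch only; the real handler also handles HTTP plumbing and
// protection against concurrent profiling requests.)
std::string handle_pprof_heap(int seconds)
{
    std::string body;
    if (seconds > 0) {
        // Heap profiling: records allocations made while the profiler runs.
        HeapProfilerStart("/tmp/heap_profile");  // prefix is illustrative
        std::this_thread::sleep_for(std::chrono::seconds(seconds));
        char *profile = GetHeapProfile();  // malloc()-ed, caller must free()
        HeapProfilerStop();
        body.assign(profile);
        free(profile);
    } else {
        // Heap sampling: reports currently live allocations, sampled by
        // tcmalloc according to TCMALLOC_SAMPLE_PARAMETER.
        MallocExtension::instance()->GetHeapSample(&body);
    }
    return body;
}
```

For example, `curl 'http://<host>:<port>/pprof/heap?seconds=30'` would trigger heap profiling for 30 seconds, while `curl 'http://<host>:<port>/pprof/heap'` would return a heap sample; for the latter to produce useful output, start the server with something like `TCMALLOC_SAMPLE_PARAMETER=524288`, since tcmalloc does not sample allocations by default.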