[chore] [deltatocumulative]: linear histograms #36486
Conversation
```go
// Dispatch on the concrete datapoint type, then delegate accumulation
// to the matching wrapper's Add in the data package.
switch dp := any(dp).(type) {
case pmetric.NumberDataPoint:
	state := any(state).(pmetric.NumberDataPoint)
	data.Number{NumberDataPoint: state}.Add(data.Number{NumberDataPoint: dp})
case pmetric.HistogramDataPoint:
	state := any(state).(pmetric.HistogramDataPoint)
	data.Histogram{HistogramDataPoint: state}.Add(data.Histogram{HistogramDataPoint: dp})
case pmetric.ExponentialHistogramDataPoint:
	state := any(state).(pmetric.ExponentialHistogramDataPoint)
	data.ExpHistogram{DataPoint: state}.Add(data.ExpHistogram{DataPoint: dp})
}
```
This refactor effectively eliminates the need for the data package, as we no longer rely on type characteristics.
I'll refactor datapoint addition in a future PR to make this part clearer, maybe like this:
```go
var add data.Aggregator = new(data.Add)

// switch on the stored state, since dp is added into it
switch into := any(state).(type) {
case pmetric.NumberDataPoint:
	add.Numbers(into, dp)
case pmetric.HistogramDataPoint:
	add.Histograms(into, dp)
}
```
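(Names like `data.Aggregator`, `data.Add`, `Numbers`, and `Histograms` above are sketched for that future PR, not existing APIs in the `data` package.)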
I did a first pass only through the benchmark. I totally understand my comments are nitpicks; I'm just sharing my personal preference when it comes to code style.
From our conversations I understand you prefer a more "declarative" style, but to me it makes the code much harder to read, since I expect things to run in the order they are written. Scrolling up and down several times until I finally understand what the code does makes it less readable in my opinion.
Again, not a blocker!
@ArthurSens while the benchmark is split by datatype, I only ran it for sums, as the processor part (stream tracking, etc.) is the same across all datatypes. Speed differences come only from the Add implementation.
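For readers following along: the `Processor/sums-8` name in the benchstat output below implies a top-level `BenchmarkProcessor` with per-datatype sub-benchmarks. A rough, hypothetical sketch of that shape (the loop body is an assumption, not the repo's actual benchmark code):

```go
package deltatocumulativeprocessor_test

import "testing"

// Hypothetical sketch: the "Processor/sums-8" benchstat name implies this
// overall structure; only the sums case was run for the comparison above.
func BenchmarkProcessor(b *testing.B) {
	for _, datatype := range []string{"sums", "histograms", "exphistograms"} {
		b.Run(datatype, func(b *testing.B) {
			b.ReportAllocs()
			for i := 0; i < b.N; i++ {
				// build delta metrics of this datatype and feed them
				// through the processor's ConsumeMetrics
			}
		})
	}
}
```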
Expands the linear architecture to cover exponential and fixed-width histograms.
Description
Finishes work started in #35048
That PR only partially introduced the less complex processor architecture, using it for Sums alone.
Back then I was not sure of the best way to do it for multiple datatypes, as generics seemed to introduce a lot of complexity regardless of usage.
I have since done a lot of perf analysis, and due to the way Go works (see gcshapes), we do not really gain anything at runtime from using generics, given method calls are still dynamic (a short illustration follows the benchmark below).
This implementation uses regular Go interfaces and a good old type switch in the hot path (ConsumeMetrics), which lowers mental complexity quite a lot imo.
The value of the new architecture is backed up by the following benchmark:

```
goos: linux
goarch: arm64
pkg: github.com/open-telemetry/opentelemetry-collector-contrib/processor/deltatocumulativeprocessor
                 │ sums.nested │          sums.linear           │
                 │   sec/op    │    sec/op     vs base          │
Processor/sums-8   56.35µ ± 1%    39.99µ ± 1%  -29.04% (p=0.000 n=10)

                 │ sums.nested  │           sums.linear           │
                 │     B/op     │     B/op       vs base          │
Processor/sums-8   11.520Ki ± 0%   3.683Ki ± 0%  -68.03% (p=0.000 n=10)

                 │ sums.nested │          sums.linear          │
                 │  allocs/op  │  allocs/op    vs base         │
Processor/sums-8    365.0 ± 0%    260.0 ± 0%  -28.77% (p=0.000 n=10)
```
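To unpack the gcshapes point: under Go's GC-shape stenciling, every pointer type shares a single generic instantiation, so a method call on an interface-constrained type parameter is resolved through a runtime dictionary. A minimal, hypothetical sketch (`Point` and `Number` are made-up names, not the processor's types):

```go
package main

import "fmt"

// Point stands in for the processor's datapoint types (made up for this sketch).
type Point interface{ Add(other Point) }

type Number struct{ v float64 }

func (n *Number) Add(other Point) { n.v += other.(*Number).v }

// addGeneric looks statically dispatched, but under GC-shape stenciling all
// pointer type arguments share one instantiation, so state.Add still goes
// through a runtime dictionary, i.e. it remains a dynamic call.
func addGeneric[P Point](state, dp P) { state.Add(dp) }

// addSwitch uses a plain interface and a type switch: the dispatch is just
// as dynamic, but explicit and easy to follow.
func addSwitch(state, dp Point) {
	switch s := state.(type) {
	case *Number:
		s.Add(dp)
	}
}

func main() {
	a, b := &Number{v: 1}, &Number{v: 2}
	addGeneric(a, b) // a.v == 3
	addSwitch(a, b)  // a.v == 5
	fmt.Println(a.v)
}
```

Either way the call is dynamic, so the type switch costs nothing extra at runtime while keeping the control flow obvious.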
Testing
This is a refactor, existing tests pass unaltered.
Documentation
Not needed.