Investigate Potential to bypass LogRecord cloning when emitting LogRecord to LogProcessor(s) #1650
I meant to say: it's best if we can do this irrespective of the number of processors, not just optimize for the single-processor scenario. Span currently special-cases the single-processor path, but I think it should be possible to always pass a ref or mut ref to processors.
Updated the title accordingly. The problem is that, as of now, there is no single owner of the created LogRecord. To pass a ref/mut ref, we first need an owner for the LogRecord, and then we must ensure the record is not dropped while any of the references is in use. Needs some investigation.
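A minimal sketch of the ownership shape being discussed, using hypothetical `LogRecord`/`LogProcessor` types (not the actual SDK definitions): the logger owns the record and lends a mutable borrow to each processor in turn, so the borrow checker guarantees no reference outlives the record and no processor needs a clone.

```rust
// Hypothetical, simplified types for illustration only.
#[derive(Clone, Debug, Default)]
struct LogRecord {
    body: Option<String>,
}

trait LogProcessor {
    // A mutable borrow lets processors enrich the record in place
    // without taking ownership or cloning.
    fn emit(&self, record: &mut LogRecord);
}

// Example processor that mutates the borrowed record in place.
struct EnrichProcessor;
impl LogProcessor for EnrichProcessor {
    fn emit(&self, record: &mut LogRecord) {
        if let Some(b) = record.body.as_mut() {
            b.push_str(" [enriched]");
        }
    }
}

struct Logger {
    processors: Vec<Box<dyn LogProcessor>>,
}

impl Logger {
    fn emit(&self, mut record: LogRecord) {
        // The logger is the single owner; each processor only borrows.
        for p in &self.processors {
            p.emit(&mut record);
        }
        // `record` is dropped here, after every processor has run.
    }
}
```

The key point is that once a single owner exists for the record's lifetime across the dispatch loop, the "record dropped while a reference is in use" hazard is ruled out at compile time.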
OK, it was much simpler to add a special case that avoids the LogData clone when there is a single processor (which I believe is the most common use case). I added that change as part of #1636, since the changes are closely tied to that PR. I can see a vast improvement in the stress test with that:

Main branch:
PR branch (with Resource propagation at startup):
PR branch (with Resource propagation + avoiding the LogData clone for a single processor):
https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-sdk/src/logs/log_emitter.rs#L210 This clone needs some rethinking. Currently every processor, even one that can stream events directly, has to pay this clone cost. I think we should just pass a reference to the LogRecord to processors; if a processor wants to batch it, it can do the clone itself and pay the cost. This should give a significant cost reduction.
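To illustrate the proposed split (a sketch with made-up types, not the SDK's actual `LogProcessor` trait): processors receive a shared reference, so a streaming processor exports straight from the borrow, while only a batching processor pays for a clone because it must own the record beyond the call.

```rust
use std::sync::Mutex;

// Hypothetical, simplified record type for illustration.
#[derive(Clone, Debug)]
struct LogRecord {
    body: String,
}

trait LogProcessor {
    // Shared reference: the clone decision moves into the processor.
    fn emit(&self, record: &LogRecord);
}

// Streams each record immediately; never clones.
struct SimpleProcessor;
impl LogProcessor for SimpleProcessor {
    fn emit(&self, record: &LogRecord) {
        println!("export: {}", record.body);
    }
}

// Buffers records for later export; clones because it must own them.
struct BatchProcessor {
    buffer: Mutex<Vec<LogRecord>>,
}
impl LogProcessor for BatchProcessor {
    fn emit(&self, record: &LogRecord) {
        self.buffer.lock().unwrap().push(record.clone());
    }
}
```

With this shape, the per-emit cost in the common streaming path is a borrow rather than a clone, and only consumers that actually need ownership pay for it.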
Originally posted by @cijothomas in #1649 (comment)