output/cloudv2: Binary-based payload #2963
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           master    #2963      +/-   ##
==========================================
- Coverage   75.21%   73.51%    -1.70%
==========================================
  Files         236      238       +2
  Lines       17722    18220     +498
==========================================
+ Hits        13330    13395      +65
- Misses       3533     3955     +422
- Partials      859      870      +11
```

Flags with carried forward coverage won't be shown.
@na-- @olegbespalov I added the aggregation proposal as well. It likely isn't optimal, but it should be a good first step toward a reasonable rate; we can iterate on it once we have some benchmarks (I don't plan to do them in this PR). I'd like to get reviews on it before doing the refactor requested in #2963 (comment), so I can address everything at once.
I've done a first round of review; I'll do another one a bit later.
Rebased after the aggregation merge
```go
	return nil
}

const payloadSizeLimit = 100 * 1024 // 100 KiB
```
Shouldn't we just break them into parts then? 100 KB doesn't sound that big, especially before compression.
How would you do that? The limit here relates to the protobuf itself; how do you chunk it without breaking the metric set message? I'm not aware of protobuf supporting a streaming/chunked encoding. Did I miss it?
This should be an extreme-case check. If we need to chunk, I think we should do it before calling the metrics client, i.e. in its caller. WDYT?
You can use https://pkg.go.dev/google.golang.org/protobuf/proto#Size to find out how big the message will be before serializing, and then split it if necessary.
And I guess a method like Push can know that protobuf will be used and cut the batch into pieces when needed.
We can probably leave that for after we merge everything. But I'd like to know how many metrics fit in 100 KB before we actually make this the default, because dropping your metrics just because they came to 101 KB sounds like a terrible experience.
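The splitting idea above could be sketched roughly as follows. This is not code from the PR: `splitBySize` is a hypothetical helper, and the length-based `sizeOf` callback stands in for `proto.Size`, which would report the encoded size of each protobuf message before serialization.

```go
package main

import "fmt"

// splitBySize greedily packs items into chunks whose summed size stays at or
// below limit. sizeOf is a stand-in for proto.Size on the real protobuf
// messages. An item larger than limit gets a chunk of its own rather than
// being dropped.
func splitBySize(items []string, limit int, sizeOf func(string) int) [][]string {
	var chunks [][]string
	var current []string
	used := 0
	for _, it := range items {
		s := sizeOf(it)
		if len(current) > 0 && used+s > limit {
			chunks = append(chunks, current)
			current, used = nil, 0
		}
		current = append(current, it)
		used += s
	}
	if len(current) > 0 {
		chunks = append(chunks, current)
	}
	return chunks
}

func main() {
	// Five 4-byte items with a 10-byte limit split into three chunks.
	items := []string{"aaaa", "bbbb", "cccc", "dddd", "eeee"}
	chunks := splitBySize(items, 10, func(s string) int { return len(s) })
	fmt.Println("chunks:", len(chunks)) // chunks: 3
}
```

A caller such as Push could run this before serializing and send each chunk as its own request, so no batch ever exceeds the payload limit.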
LGTM! But we should measure how much encoded metric data fits in 100 KB before we actually make this the default way to push metrics to the cloud.
Yes, I already have it on my list. I'll also add it to the worklog in #2954 as a reminder.
This PR adds the HTTP client for flushing metrics to the remote cloud service, along with the new required Protobuf-based payload.
Updates #2954