Make special handling of `check` metrics in StatsD output configurable #2470
I am running k6 through TeamCity on Docker and sending data to Datadog using statsd. There are 2 issues with it. I opened Datadog support ticket 954788, and they told me to open a ticket with k6 support. Please fix the issue below. Here is what the Datadog team found after looking into the `http_req_failed` metric: the k6 documentation does state that `http_req_failed` is a Rate (a metric that tracks the percentage of added values that are non-zero).
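To make the Rate semantics mentioned above concrete, here is a minimal sketch of how such a metric is derived: the fraction of added values that are non-zero. The function name is illustrative and is not part of k6's actual API.

```go
package main

import "fmt"

// rate mimics how a k6 Rate metric is derived: the fraction of added
// values that are non-zero. Illustrative only, not k6's actual code.
func rate(values []float64) float64 {
	nonZero := 0
	for _, v := range values {
		if v != 0 {
			nonZero++
		}
	}
	return float64(nonZero) / float64(len(values))
}

func main() {
	// For http_req_failed: 1 = failed request, 0 = successful request.
	samples := []float64{1, 0, 0, 1}
	fmt.Println(rate(samples)) // 0.5
}
```

So a reported value of 1 means every sample was a failure, and 0 means none were.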
Thanks for your feedback @papatelst 🙇
The thing is, the number isn't random - it is the number of positive (non-zero) samples. If it is not reported, it means there were zero failures. I guess you can correlate this to
This seems to be some private ticket that I can not access. Don't know if we need to, or if we should, either 🤷 It seems to me like we need to implement the option proposed in this issue. I also think that we should make this the default, but that will require it to be implemented first and at least a release of warnings around it. So maybe let's focus on the implementation part first.
@mstoykov Here is an example. There is also another issue where `k6.http_req_duration.avg` in the Datadog UI is close to, but never the same as, the k6 build log.
You are misinterpreting the results: it is 100 failed and 67 not failed, so 100/167. Unfortunately this is pretty common, as having double negatives is not a great idea, and there is already an issue with more info: #2306. The 1.09 is confusing to me as well, and I can only guess that, given that it is an average (btw, never use averages ;) ), it is the average of the value over some period. So Datadog received 1, 10, 0, 0, 0, 3, averaged it, and got that number. Maybe ask them how they aggregate - at this point in time k6 does not aggregate when outputting to the statsd output, which is the same one Datadog uses. You should be able to see over what period that happens.
statsd, the protocol, which is what Datadog receives, does not support percentages. The way to do them is basically what k6 does specifically for the `check` metric, which is basically what this issue is about - adding the ability for the other internal k6 Rate metrics to be sent to Datadog in this format. I also just found this community forum thread where, if you stop the Datadog agent too fast, it doesn't send the last few metrics. I also ended up actually doing the dashboard and trying to populate it, and it looks something like this: [screenshot]
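Since the statsd line protocol only has counters, gauges, timers and sets (no percentages), a check result ends up as two possible counter datagrams. A sketch of that format, with an illustrative check name (this mirrors the `check.<check-name>.pass`/`.fail` naming, but is not k6's actual code):

```go
package main

import "fmt"

// checkDatagram builds the statsd counter line for a single check
// result: check.<check-name>.pass or check.<check-name>.fail.
// Sketch of the wire format only, not k6's actual implementation.
func checkDatagram(checkName string, passed bool) string {
	suffix := "fail"
	if passed {
		suffix = "pass"
	}
	return fmt.Sprintf("check.%s.%s:1|c", checkName, suffix)
}

func main() {
	fmt.Println(checkDatagram("status_is_200", true))  // check.status_is_200.pass:1|c
	fmt.Println(checkDatagram("status_is_200", false)) // check.status_is_200.fail:1|c
}
```

The receiving side can then compute any percentage it wants from the two counters.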
As you can also see, if I don't average over the metrics, `http_req_failed` is a whole integer (as it should be).

Also, that avg is over the time the Datadog agent aggregates/buffers before sending to Datadog (from what I understand) - again, k6 does not aggregate samples at this point in time on the k6 side for the statsd output, but the Datadog agent definitely does. I do remember there were some problems with high volumes and Docker, so whether you are using Docker or not may matter here.

Please, if you want to report different issues, make new issues. This one is about how Rate is handled when we need to send it to statsd/Datadog. Discussing other issues here is likely to only muddy this one and make it harder to discuss the particular thing at hand. It also makes it harder to keep track of what we have answered/looked into and not. Again, thank you for reporting this 🙇
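The agent-side averaging described above can be illustrated with the numbers from the earlier comment (the buffered values are assumptions for illustration, not real measurements):

```go
package main

import "fmt"

// mean averages a slice of buffered counter values, the way a statsd
// agent might before flushing. Purely illustrative of why a count
// metric can show up as a fractional average in the UI.
func mean(values []float64) float64 {
	sum := 0.0
	for _, v := range values {
		sum += v
	}
	return sum / float64(len(values))
}

func main() {
	// Hypothetical values buffered during one agent flush interval.
	buffered := []float64{1, 10, 0, 0, 0, 3}
	fmt.Printf("%.2f\n", mean(buffered)) // 2.33
}
```

This is how whole-integer samples sent by k6 can still produce a fractional number like 1.09 once an agent aggregates them.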
@mstoykov 1.09 is not an average of multiple runs. It is the single value of a single run. In my Datadog UI snapshot you can see it. Should I open a separate GitHub issue for "There is also another issue where k6.http_req_duration.avg in datadog UI is close to, but never the same as K6 build log. K6 is not sending certain metrics to Datadog."?
NO - it clearly is the average of the values - k6 does not calculate the average, it just sends the values (I specifically left one of them as an average to see the difference). If you notice, all the values in the max/min/sum are whole integers, not the avg.
Yes - let's open a new issue |
Per @olegbespalov and @javaducky, this issue should probably be part of the StatsD project. Feel free to transfer it here: |
Closing in favor of LeonAdato/xk6-output-statsd#28 |
The `statsd` output (also used for New Relic and DataDog, among others) currently has special handling of `check` metric samples:

k6/output/statsd/output.go (lines 79 to 87 in 59a8883)
k6/output/statsd/output.go (lines 94 to 100 in 59a8883)

It transforms the metric names to `check.<check-name>.pass` or `check.<check-name>.fail`. I don't remember why this was done, probably because StatsD doesn't seem to support anything like our `Rate` metric (so we transform it to 2 `Count` ones here) and because we didn't want the checks to be grouped together (sort of a localized #1321 workaround). In any case, it definitely needs to be better documented (grafana/k6-docs#603) and maybe ideally also configurable (this issue) 🤔

For example, we can add a new `K6_STATSD_EXPAND_RATE_ON_TAGS` option to the statsd output, with the default value of `check`. Then:

- if someone sets `K6_STATSD_EXPAND_RATE_ON_TAGS` to an empty string, the special handling of checks will be disabled
- if someone sets `K6_STATSD_EXPAND_RATE_ON_TAGS` to something else, other `Rate` metrics could be expanded in a similar manner as well

Connected issues and forum threads:

- `checks` metric. #1403
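The proposed option could be sketched along these lines. Everything here is an assumption about how `K6_STATSD_EXPAND_RATE_ON_TAGS` might behave (the function names, the tag-matching rule, the 0 = fail / non-zero = pass convention for Rate samples); it is not the actual k6 implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// expandRateSample sketches the proposed behaviour: if a Rate sample
// carries one of the configured tags, it is expanded into
// <metric>.<tag-value>.pass / .fail counter datagrams instead of being
// sent as a raw 0/1 value. All names are illustrative.
func expandRateSample(metric string, tags map[string]string, value float64, expandOn []string) string {
	for _, tag := range expandOn {
		if tagValue, ok := tags[tag]; ok {
			suffix := "fail"
			if value != 0 { // non-zero Rate sample = "pass"
				suffix = "pass"
			}
			return fmt.Sprintf("%s.%s.%s:1|c", metric, tagValue, suffix)
		}
	}
	// No matching tag: send the sample unexpanded.
	return fmt.Sprintf("%s:%v|c", metric, value)
}

func main() {
	// Default K6_STATSD_EXPAND_RATE_ON_TAGS value: "check".
	expandOn := strings.Split("check", ",")
	tags := map[string]string{"check": "status_is_200"}
	fmt.Println(expandRateSample("check", tags, 1, expandOn)) // check.status_is_200.pass:1|c
}
```

Setting the option to an empty list would make the loop match nothing, reproducing the "special handling disabled" case from the bullet points above.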