speedtest/Dockerfile:
Consider updating this line from:

```dockerfile
RUN echo "${SPEEDTEST_CRON_SCHEDULE} root /usr/bin/speedtest --accept-license --accept-gdpr -s ${SPEEDTEST_SERVER} -f json > \$(mktemp -u -p /var/log/speedtest XXXXXX).json" > /etc/cron.d/speedtest
```

...to:

```dockerfile
RUN echo "${SPEEDTEST_CRON_SCHEDULE} root /usr/bin/speedtest --accept-license --accept-gdpr -s ${SPEEDTEST_SERVER} -f json > /var/log/speedtest/results.json" > /etc/cron.d/speedtest
```
There is really no need to dump the results into a unique JSON file every time. Long term, with speedtest running every minute, you end up with 1,440 files per day and roughly 525,000 files per year. That is serious overkill and a waste of resources.
It gets worse: the telegraf agent is set to read all of those files every 10s, so it re-reads the same data from thousands and thousands of files six times every minute.
Sure, you can raise the telegraf agent interval to "1m", but that doesn't fix the inefficient filesystem usage: speedtest still creates an ever-growing pile of files that telegraf then has to read back.
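(For reference, that partial mitigation is a one-line change in telegraf.conf; treat the "10s" starting value as an assumption about the stock config:)

```toml
# telegraf.conf -- agent-wide polling interval
[agent]
  interval = "1m"   # was "10s"; cuts the re-reads but not the file sprawl
```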
The cleanest approach is to drop cron and the file output entirely and let telegraf run speedtest itself, as sketched below.
With this setup, the telegraf agent executes the speedtest binary, which prints its JSON to stdout; telegraf reads and parses that stdout and writes the results into InfluxDB. No files ever touch the filesystem.
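A minimal sketch of such a config using telegraf's exec input plugin, assuming the Ookla speedtest CLI is installed in the container and an InfluxDB output is already configured elsewhere in telegraf.conf; the run cadence and timeout are assumptions, pick your own:

```toml
# telegraf.conf -- run speedtest directly instead of tailing JSON files
[[inputs.exec]]
  commands = [
    "/usr/bin/speedtest --accept-license --accept-gdpr -s ${SPEEDTEST_SERVER} -f json"
  ]
  interval = "5m"          # per-plugin override; assumed cadence, tune to taste
  timeout = "90s"          # a full speedtest run can take a while
  data_format = "json"     # parse the CLI's JSON from stdout
  name_override = "speedtest"
```

With data_format = "json", telegraf flattens the nested JSON, so values like the download/upload bandwidth come through as numeric fields; use fieldpass if you want to trim what gets written.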
Another advantage is that you end up with a single container, with both telegraf and speedtest in one nice package :)
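A rough sketch of that single-container image, assuming the official telegraf image (Debian-based) and Ookla's packagecloud repo for the speedtest CLI; the image tag and repo setup script are assumptions, not taken from this repo:

```dockerfile
# Single container: official telegraf image + Ookla speedtest CLI
FROM telegraf:latest

# Install the Ookla speedtest CLI from their packagecloud repo
RUN apt-get update && apt-get install -y curl gnupg \
    && curl -s https://packagecloud.io/install/repositories/ookla/speedtest-cli/script.deb.sh | bash \
    && apt-get install -y speedtest \
    && rm -rf /var/lib/apt/lists/*

# Drop in the exec-based config from above
COPY telegraf.conf /etc/telegraf/telegraf.conf
```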