The Data explorer seems to be not working #26030
Replies: 1 comment
---
Solved: the time range issue was caused by incorrect timestamp parsing in Telegraf, which led to shifted or missing timestamps in InfluxDB. I fixed it by handling the milliseconds and the time zone offset correctly, so timestamps are now stored properly in UTC. Because Telegraf was parsing the data inconsistently, my data from 2024-02-07 was showing up under 2024-01-07. With that fixed, I will now explore the other parts of it.
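For anyone hitting the same shifted-date symptom: the fix comes down to parsing the fractional seconds and the offset, then normalizing to UTC before storage. A minimal Python sketch (the sample timestamp is the one from this thread's CSV; the exact parsing code is an illustration, not what Telegraf does internally):

```python
from datetime import datetime, timezone

# Timestamp as it appears in the CSV: milliseconds plus a +01:00 offset
raw = "2024-02-07 12:55:52.565+01:00"

# %f handles the fractional seconds, %z the numeric time zone offset
ts = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f%z")

# Normalize to UTC before storing, so the point lands on the correct
# date (2024-02-07, not a shifted one)
ts_utc = ts.astimezone(timezone.utc)
print(ts_utc.isoformat())  # 2024-02-07T11:55:52.565000+00:00
```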
---
Hi,
I am ingesting a 1000-row CSV file via Telegraf into InfluxDB. Here are my config file and screenshots of what I am trying.
    # Configuration for telegraf agent
    [agent]
      ## Default data collection interval for all inputs
      interval = "5s"

      ## Rounds collection interval to 'interval'
      ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
      round_interval = true

      ## Telegraf will send metrics to outputs in batches of at most
      ## metric_batch_size metrics.
      ## This controls the size of writes that Telegraf sends to output plugins.
      metric_batch_size = 10000

      ## Maximum number of unwritten metrics per output. Increasing this value
      ## allows for longer periods of output downtime without dropping metrics at the
      ## cost of higher maximum memory usage.
      metric_buffer_limit = 10000

      ## Collection jitter is used to jitter the collection by a random amount.
      ## Each plugin will sleep for a random time within jitter before collecting.
      ## This can be used to avoid many plugins querying things like sysfs at the
      ## same time, which can have a measurable effect on the system.
      # collection_jitter = "0s"

      ## Default flushing interval for all outputs. Maximum flush_interval will be
      ## flush_interval + flush_jitter
      flush_interval = "5s"

      ## Jitter the flush interval by a random amount. This is primarily to avoid
      ## large write spikes for users running a large number of telegraf instances.
      ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
      # flush_jitter = "0s"

      ## By default or when set to "0s", precision will be set to the same
      ## timestamp order as the collection interval, with the maximum being 1s.
      ##   ie, when interval = "10s", precision will be "1s"
      ##       when interval = "250ms", precision will be "1ms"
      ## Precision will NOT be used for service inputs. It is up to each individual
      ## service input to set the timestamp at the appropriate precision.
      ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
      # precision = ""

      ## Log at debug level.
      debug = false
      ## Log only error level messages.
      quiet = false

      ## Log target controls the destination for logs and can be one of "file",
      ## "stderr" or, on Windows, "eventlog". When set to "file", the output file
      ## is determined by the "logfile" setting.
      logtarget = "file"

      ## Name of the file to be logged to when using the "file" logtarget. If set to
      ## the empty string then logs are written to stderr.
      logfile = ""

      ## The logfile will be rotated after the time interval specified. When set
      ## to 0 no time based rotation is performed. Logs are rotated only when
      ## written to, if there is no log activity rotation may be delayed.
      logfile_rotation_interval = "0d"

      ## The logfile will be rotated when it becomes larger than the specified
      ## size. When set to 0 no size based rotation is performed.
      logfile_rotation_max_size = "0MB"

      ## Maximum number of rotated archives to keep, any older logs are deleted.
      ## If set to -1, no archives are removed.
      logfile_rotation_max_archives = 5

      ## Pick a timezone to use when logging or type 'local' for local time.
      ## Example: America/Chicago
      log_with_timezone = ""

      ## Override default hostname, if empty use os.Hostname()
      hostname = ""
      ## If set to true, do not set the "host" tag in the telegraf agent.
      omit_hostname = false

    [[outputs.influxdb_v2]]
      ## The URLs of the InfluxDB cluster nodes.
      ## Multiple URLs can be specified for a single cluster, only ONE of the
      ## urls will be written to each interval.
      ##   ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
      urls = ["http://189.56.890.220:8086"]

      ## Token for authentication.
      # token = "$INFLUX_TOKEN"
      token = "OpsDvCTA5SONk4_5tlCVniviDXiU2SW4PNlNTD1COBvawWQhR-etNrQpXUajIJqLbf7aMuiIvDEYibWycXiEOA=="

      ## Organization is the name of the organization you wish to write to; must exist.
      organization = "University of Koblenz"

      ## Destination bucket to write into.
      bucket = "datapoints"

      ## The value of this tag will be used to determine the bucket. If this
      ## tag is not set the 'bucket' option is used as the default.
      bucket_tag = ""

      ## If true, the bucket tag will not be added to the metric.
      exclude_bucket_tag = false

      ## Timeout for HTTP messages.
      timeout = "5s"

      ## Additional HTTP headers
      # http_headers = {"X-Special-Header" = "Special-Value"}

      ## HTTP Proxy override. If unset, the standard proxy environment
      ## variables are consulted to determine which proxy, if any, should be used.
      # http_proxy = "http://corporate.proxy:3128"

      ## HTTP User-Agent
      user_agent = "telegraf"

      ## Content-Encoding for write request body, can be set to "gzip" to
      ## compress body or "identity" to apply no encoding.
      content_encoding = "gzip"

      ## Enable or disable uint support for writing uints influxdb 2.0.
      influx_uint_support = false

      ## Optional TLS Config for use on HTTP connections.
      # tls_ca = "/etc/telegraf/ca.pem"
      # tls_cert = "/etc/telegraf/cert.pem"
      # tls_key = "/etc/telegraf/key.pem"

      ## Use TLS but skip chain & host verification
      insecure_skip_verify = false

    ## Parse a complete file each interval.
    ## Files to parse each interval. Accept standard unix glob matching rules,
    ## as well as ** to match recursive files and directories.
    # [[inputs.file]]
    #   files = ["/etc/telegraf/fixed_data.csv"]
    #   data_format = "csv"
    #   csv_header_row_count = 1
    #   csv_column_names = ["timestamp", "value", "extra_column", "category"]
    #   csv_skip_rows = 0
    #   csv_timestamp_column = "timestamp"
    #   csv_column_types = ["timestamp", "float", "string", "int"]
    #   csv_timestamp_format = "2006-01-02 15:04:05+01:00"

    [[inputs.file]]
      files = ["/etc/telegraf/fixed_data.csv"]
      # from_beginning = true
      data_format = "csv"
      csv_header_row_count = 1
      csv_delimiter = ","   # ensure commas are used as delimiters
      csv_column_names = ["timestamp", "value", "extra_column", "category"]
      csv_skip_rows = 0     # make sure no useful rows are skipped
      csv_timestamp_column = "timestamp"
      csv_column_types = ["timestamp", "float", "string", "int"]
      csv_timestamp_format = "2006-01-02 15:04:05+01:00"  # adjust based on your format

      ## Other formats I have tried:
      # csv_timestamp_format = "2006-01-02 15:04:05+01"
      # csv_timestamp_format = "2024-02-07T23:28:34+01:00"
      # csv_timestamp_format = "2006-01-02T15:04:05Z07:00"
      # csv_timestamp_format = "2006-01-02 15:04:05.000-07:00"
      # csv_timestamp_format = "2006-01-02 15:04:05Z07"
      # csv_timestamp_column = "collectionBeginTime"
      # csv_timestamp_format = "2024-02-07 12:55:52.565+01:00"

      ## Character encoding to use when interpreting the file contents. Invalid
      ## characters are replaced using the unicode replacement character. When set
      ## to the empty string the data is not decoded to text.
      ##   ex: character_encoding = "utf-8"
      ##       character_encoding = "utf-16le"
      ##       character_encoding = "utf-16be"
      character_encoding = ""

      ## Data format to consume.
      ## Each data format has its own unique set of configuration options, read
      ## more about them here:
      ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
      # data_format = "influx"

      ## Name a tag containing the name of the file the data was parsed from. Leave empty
      ## to disable. Cautious when file name variation is high, this can increase the cardinality
      ## significantly. Read more about cardinality here:
      ## https://docs.influxdata.com/influxdb/cloud/reference/glossary/#series-cardinality
      file_tag = ""
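A note on `csv_timestamp_format`: Telegraf timestamp formats are Go reference-time layouts, in which `01` always stands for the month and a numeric offset must be written with the reference value (`-07:00`, `-0700`, or `Z07:00`). In the layout `2006-01-02 15:04:05+01:00`, the `01` after the `+` is read as a second month placeholder, which very likely explains why 2024-02-07 data showed up under 2024-01-07. Of the formats tried above, `2006-01-02 15:04:05.000-07:00` is the one that matches timestamps like `2024-02-07 12:55:52.565+01:00`. As a sketch, here is a small Python script (filename and sample values are hypothetical) that writes rows in exactly that shape:

```python
import csv
from datetime import datetime, timedelta, timezone

def format_ts(dt):
    """Render an aware datetime with milliseconds and a colon in the
    offset, e.g. 2024-02-07 12:55:52.565+01:00, which the Go layout
    "2006-01-02 15:04:05.000-07:00" can parse."""
    base = dt.strftime("%Y-%m-%d %H:%M:%S") + f".{dt.microsecond // 1000:03d}"
    off = dt.strftime("%z")                # e.g. "+0100"
    return base + off[:3] + ":" + off[3:]  # e.g. "+01:00"

# Hypothetical sample rows using the column names from the config above
tz = timezone(timedelta(hours=1))
start = datetime(2024, 2, 7, 12, 55, 52, 565000, tzinfo=tz)
with open("fixed_data.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["timestamp", "value", "extra_column", "category"])
    for i in range(3):
        w.writerow([format_ts(start + timedelta(seconds=5 * i)), 1.5 * i, "note", i])
```

With rows in that shape, setting `csv_timestamp_format = "2006-01-02 15:04:05.000-07:00"` should store each point under its correct date.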
Any help would be much appreciated :)