Telegraf with logparser timeout insert to influxdb

Hi @alvianno,
Do you have some more information?
What is the state of your database? What is your question?
What is the link with logparser?

Best regards

My database is fine. Telegraf reads a CSV file with the logparser plugin, but when there are many inserts to the database I see client timeout errors in the Telegraf log.

Hi @alvianno, can you share the agent section of your Telegraf configuration?

# Telegraf Configuration
#
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.
#
# Plugins must be declared in here to be active.
# To deactivate a plugin, comment out the name and any variables.
#
# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
# file would generate.
#
# Environment variables can be used anywhere in this config file, simply prepend
# them with $. For strings the variable must be within quotes (ie, "$STR_VAR"),
# for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR)

# Global tags can be specified here in key="value" format.
[global_tags]
  # dc = "us-east-1" # will tag all metrics with dc=us-east-1
  # rack = "1a"
  ## Environment variables can be used as tags, and throughout the config file
  # user = "$USER"

# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 10000

  ## For failed writes, telegraf will cache metric_buffer_limit metrics for each
  ## output, and will flush this buffer on a successful write. Oldest metrics
  ## are dropped first when this buffer fills.
  ## This buffer only fills when writes fail to output plugin(s).
  metric_buffer_limit = 100000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. Maximum flush_interval will be
  ## flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
  precision = "ns"

  ## Logging configuration:
  ## Run telegraf with debug log messages.
  debug = false
  ## Run telegraf in quiet mode (error log messages only).
  quiet = false
  ## Specify the log file name. The empty string means to log to stderr.
  logfile = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do not set the "host" tag in the telegraf agent.
  omit_hostname = false

###############################################################################
#                            OUTPUT PLUGINS                                   #
###############################################################################

# Configuration for sending metrics to InfluxDB
[[outputs.influxdb]]
  ## The full HTTP or UDP URL for your InfluxDB instance.
  ##
  ## Multiple URLs can be specified for a single cluster, only ONE of the
  ## urls will be written to each interval.
  # urls = ["unix:///var/run/influxdb.sock"]
  # urls = ["udp://127.0.0.1:8089"]
  urls = ["http://172.16.11.124:8086"]

  ## The target database for metrics; will be created as needed.
  ## For UDP url endpoint database needs to be configured on server side.
  database = "LOG_KPEI"

  ## The value of this tag will be used to determine the database. If this
  ## tag is not set the 'database' option is used as the default.
  # database_tag = ""

  ## If true, no CREATE DATABASE queries will be sent. Set to true when using
  ## Telegraf with a user without permissions to create databases or when the
  ## database already exists.
  # skip_database_creation = false

  ## Name of existing retention policy to write to. Empty string writes to
  ## the default retention policy. Only takes effect when using HTTP.
  retention_policy = "autogen"

  ## Write consistency (clusters only), can be: "any", "one", "quorum", "all".
  ## Only takes effect when using HTTP.
  # write_consistency = "any"

  ## Timeout for HTTP messages.
  timeout = "60s"

  ## HTTP Basic Auth
  # username = "telegraf"
  # password = "metricsmetricsmetricsmetrics"

  ## HTTP User-Agent
  # user_agent = "telegraf"

  ## UDP payload size is the maximum packet size to send.
  # udp_payload = "512B"

  ## Optional TLS Config for use on HTTP connections.
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## HTTP Proxy override, if unset values the standard proxy environment
  ## variables are consulted to determine which proxy, if any, should be used.
  # http_proxy = "http://corporate.proxy:3128"

  ## Additional HTTP headers
  # http_headers = {"X-Special-Header" = "Special-Value"}

  ## HTTP Content-Encoding for write request body, can be set to "gzip" to
  ## compress body or "identity" to apply no encoding.
  # content_encoding = "identity"

  ## When true, Telegraf will output unsigned integers as unsigned values,
  ## i.e.: "42u". You will need a version of InfluxDB supporting unsigned
  ## integer values. Enabling this option will result in field type errors if
  ## existing data has been written.
  # influx_uint_support = false

###############################################################################
#                            INPUT PLUGINS                                    #
###############################################################################

[[inputs.logparser]]
  ## Log files to parse.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   /var/log/**.log     -> recursively find all .log files in /var/log
  ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
  ##   /var/log/apache.log -> only tail the apache log file
  files = ["/media/logs/crappie21/mongodb/mongod.log"]
  from_beginning = true

  ## Method used to watch for file updates. Can be either "inotify" or "poll".
  # watch_method = "inotify"

  ## Parse logstash-style "grok" patterns:
  ##   Telegraf built-in parsing patterns: https://goo.gl/dkay10
  [inputs.logparser.grok]
    patterns = ['''%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02T15:04:05.999+0700"}\s%{WORD:Log_Level:tag}\s%{NOTSPACE:Log_Type:tag}\s+\[%{ERROR_CODE:Error_Code:tag}(%{NUMBER:Error_Code_Number:drop}?)\]\s%{GREEDYDATA:Message:string}?''']
    unique_timestamp = "disabled"
    measurement = "MongoDB"
    custom_pattern_files = []
    custom_patterns = '''ERROR_CODE [a-zA-Z-]*
'''
    timezone = "local"
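As a quick sanity check outside Telegraf, the grok pattern above can be approximated with a plain regular expression and run against a sample line. This is only a sketch: the sample MongoDB log line is an assumption for illustration, and the named groups just mirror the grok captures.

```python
import re

# Hypothetical sample line in the MongoDB log format the grok pattern targets.
line = "2019-06-20T10:15:30.123+0700 I NETWORK [conn123] end connection 127.0.0.1:53458"

# Rough Python equivalent of the grok pattern (group names mirror the grok captures):
pattern = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+[+-]\d{4})\s"  # TIMESTAMP_ISO8601
    r"(?P<Log_Level>\w+)\s"                                                # WORD
    r"(?P<Log_Type>\S+)\s+"                                                # NOTSPACE
    r"\[(?P<Error_Code>[a-zA-Z-]*)(?P<Error_Code_Number>\d+)?\]\s"         # custom ERROR_CODE + NUMBER
    r"(?P<Message>.*)"                                                     # GREEDYDATA
)

m = pattern.match(line)
print(m.group("Log_Level"), m.group("Log_Type"), m.group("Error_Code"))  # -> I NETWORK conn
```

If this regex fails to match real lines from your mongod.log, the grok pattern will fail too, and Telegraf will silently emit nothing for those lines.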

Hi,

Does it always take around 3 s to write 100,000 measurements? You will have to find out if something is slowing down your system.

Is the log from during the night? Do you have the same problem during the day? Does the problem always occur around the same time?

For the first error: output "influxdb" did not complete within its flush interval

you could increase the flush_interval.
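For example (a sketch only, the values are illustrative, not tuned recommendations): raising flush_interval in the [agent] section gives the output more time to finish each write, and lowering metric_batch_size makes each individual write request smaller and faster.

```toml
[agent]
  ## Give the influxdb output more time between flushes (the config above uses "10s").
  flush_interval = "30s"
  ## Smaller batches mean smaller, faster individual writes (the config above uses 10000).
  metric_batch_size = 5000
```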

For the second error: Timeout exceeded while awaiting headers

can you check what happens on the database around that time? Heavy load? CQs?