Parse nginx accounting module log using GROK

telegraf

#1

I am trying to parse the log generated by the nginx traffic accounting module (https://www.nginx.com/resources/wiki/modules/traffic_accounting/) and then insert the data into InfluxDB.

Sample data

telegraf conf

In this example, I’d like to parse the “statuses” data as key:value pairs. The InfluxDB data should look like below:

The logfmt parser processor does not work. Is there any other method? Please share a config sample.


#2

I don’t think it is possible with this format: it can’t be done with grok because of the dynamic number of fields, and logfmt does not support these delimiters. Any chance you could modify the format of the statuses?


#3

Thanks for your reply.
I found another workaround using processors.strings.replace, available since 1.9.x (currently a release candidate).
There is another problem: when I use processors.strings.replace, the basicstats aggregator plugin does not work.
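For readers following along, here is a minimal sketch of the processors.strings.replace workaround. The field name "statuses" and the ":" / "|" delimiters are assumptions (the original sample data is not shown in the thread); adjust them to match the actual log format:

```toml
# Sketch only: the field name "statuses" and the ":" / "|" delimiters
# are assumptions; adjust them to match your actual log format.
[[processors.strings]]
  # Rewrite e.g. "200:5|404:2" into logfmt-style "200=5 404=2"
  [[processors.strings.replace]]
    field = "statuses"
    old = ":"
    new = "="
  [[processors.strings.replace]]
    field = "statuses"
    old = "|"
    new = " "

# Then re-parse the rewritten field with the logfmt data format
[[processors.parser]]
  parse_fields = ["statuses"]
  data_format = "logfmt"
  merge = "override"
```

With merge = "override", the parsed key:value pairs are merged back into the original metric rather than emitted as a separate one.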


#4

I’m not sure if this will fix it, but anytime you have more than one processor you will want to use the order option to control the order they are applied:

[[processors.strings]]
  order = 1
  # other options

[[processors.parser]]
  order = 2
  # other options

#5

The processor plugins are fine; it is the basicstats aggregator plugin that does not work as I expected.

Whenever one line is appended to the log file, I expect the aggregator plugin to run. But sometimes it works, and sometimes it does not.
Is there any limitation? For example, a minimum number of lines before the data is aggregated?


#6

It works based on the period setting: metrics should be emitted every 30s in your example (as long as at least one metric was collected during that time).


#7

Sorry, you’re right. There was a mistake in my test.
One more thing: can I change the measurement name of metrics produced by the basicstats aggregator plugin?


#8

I believe this should work:

[[aggregators.basicstats]]
  name_override = "howdy"
  period = "30s"
  drop_original = false
  stats = ["sum"]

#9

It works. Many thanks.


#10

Hello @daniel

I have a question about it.

I applied the settings you suggested.
However, after 100 to 150 minutes of operation, InfluxDB stops being updated.
What could be the cause?


  • my version
    Telegraf: 1.9.0
    InfluxDB: 0.11.1

  • telegraf configuration
    [[aggregators.basicstats]]
      period = "30s"
      drop_original = false
      name_override = "nginx_stats_sum"
      stats = ["sum"]

Thank you.


#11

Without reading the backlog: what is the reason you are still on InfluxDB 0.11.1? The current release is 1.7.2.


#12

That is because my other collections work well with InfluxDB 0.11.1.
I do not think this is a problem with InfluxDB; Telegraf is not updating the nginx_stats_sum data.
However, as I said, I will check again after upgrading to the latest version of InfluxDB.
Have a nice day.


#13

I do not know why, but …
I confirmed that it operates normally with Telegraf 1.9.1.
Thank you.