Parse nginx accounting module log using GROK

I am trying to parse the log generated by the nginx accounting module (Traffic Accounting | NGINX) and then insert the data into InfluxDB.

Sample data

telegraf conf

In this example, I’d like to parse the “statuses” data as key:value pairs. The resulting InfluxDB data looks like below

The logfmt parser processor does not work for this. Is there any other method? Please share a config sample.

I don’t think it is possible with this format: it can’t be done with grok because of the dynamic number of fields, and logfmt does not support these delimiters. Any chance you could modify the format of the statuses?
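For reference, the logfmt format is space-separated key=value pairs. If the statuses field could be emitted in that shape, a parser processor could expand it into individual fields. A rough sketch (the field name and values here are only examples, since your actual log line isn’t shown):

[[processors.parser]]
  # parse the string field, e.g. statuses="200=150 302=12 404=3"
  parse_fields = ["statuses"]
  merge = "override"
  data_format = "logfmt"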

Thanks for your reply.
I found another workaround using processors.strings.replace, which is provided from 1.9.x (currently a release candidate).
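Roughly, the idea is to rewrite the delimiters in the statuses field so that the logfmt parser can read it. A sketch of the sort of config I mean (the “:” and “,” delimiters are only a guess here, since my sample data is not shown above):

[[processors.strings]]
  # turn something like "200:150,404:3" into "200=150 404=3"
  [[processors.strings.replace]]
    field = "statuses"
    old = ":"
    new = "="
  [[processors.strings.replace]]
    field = "statuses"
    old = ","
    new = " "

This is then followed by the logfmt parser processor as above.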
There is another problem: when I use processors.strings.replace, the basicstats aggregator plugin does not work.

I’m not sure if this will fix it, but anytime you have more than one processor you will want to use the order option to control the order they are applied:

[[processors.strings]]
  order = 1
  # other options

[[processors.parser]]
  order = 2
  # other options

The processor plugins are not the problem. The basicstats aggregator plugin does not work as I expected.

Whenever a line is appended to the log file, I expect the aggregator plugin to run, but sometimes it works and sometimes it does not.
Is there any limitation? For example, a minimum number of lines before the data is aggregated?

It works on the period setting: metrics should be emitted every 30s in your example (so long as at least one metric was collected during that time).

Sorry, you’re right. There was a mistake in my test.
One more thing: can I change the measurement name of metrics produced by the basicstats aggregator plugin?

I believe this should work:

[[aggregators.basicstats]]
  name_override = "howdy"
  period = "30s"
  drop_original = false
  stats = ["sum"]

It works. Many thanks.

Hello @daniel

I have a question about it.

I applied the settings you told me.
However, after 100 to 150 minutes of operation, InfluxDB stops being updated.
What is the likely cause?


  • my version
    Telegraf: 1.9.0
    InfluxDB: 0.11.1

  • telegraf config
    [[aggregators.basicstats]]
      period = "30s"
      drop_original = false
      name_override = "nginx_stats_sum"
      stats = ["sum"]

Thank you.

Without reading the backlog, what is the reason you are still on InfluxDB 0.11.1? The current release is 1.7.2.

This is because our other collections work well with InfluxDB 0.11.1.
I do not think this is a problem with InfluxDB; Telegraf is not updating the nginx_stats_sum data.
However, as I said, I will check again after upgrading to the latest version of InfluxDB.
Have a nice day.

I do not know why, but …
I confirmed that it operates normally in Telegraf 1.9.1.
Thank you.