Hi, I’m using Fluentd to collect native sensor data via UDP from my Juniper routers. I’ve noticed that for certain minutes I get no data points from the router, and I was hoping that back-filling the data with linear interpolation would smooth out the spikes I see in Grafana.
However, when I run the following query to backfill the data, nothing actually gets backfilled.
SELECT max(value) AS outOctets INTO "labjuniperfluentdstats"."autogen"."interfaces" FROM "labjuniperfluentd"."autogen"."interfaces" WHERE "type" =~ /egress_stats.if_octets/ AND time >= now() - 48h GROUP BY time(5m), * fill(linear)
The following query shows that I’m still missing data for certain time intervals:
SELECT outOctets FROM interfaces WHERE device =~ /J02/ AND "interface_parent" =~ /ae23$/ AND type =~ /egress_stats.if_octets/ AND time >= 1569054148288ms AND time <= 1569055527331ms
name: interfaces
time                    outOctets
----                    ---------
2019-09-21T08:23:00Z    2579056525021741
2019-09-21T08:24:00Z    2579755701888115
2019-09-21T08:25:00Z    2580291446369919
2019-09-21T08:26:00Z    2580803798672772
2019-09-21T08:27:00Z    0                   ========> did not receive egress_stats.if_octets from BBRJ02
2019-09-21T08:28:00Z    0                   ========> did not receive egress_stats.if_octets from BBRJ02
2019-09-21T08:29:00Z    2582900540676924
2019-09-21T08:30:00Z    0
2019-09-21T08:31:00Z    2584647728673427
2019-09-21T08:32:00Z    2584997174885843
2019-09-21T08:33:00Z    0                   ========> did not receive egress_stats.if_octets from BBRJ02
2019-09-21T08:34:00Z    2586744686384336
2019-09-21T08:35:00Z    2587280293418793
2019-09-21T08:36:00Z    0                   ========> did not receive egress_stats.if_octets from BBRJ02
2019-09-21T08:37:00Z    2588841085529656
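For reference, here is a quick Python sketch (not InfluxQL, just an illustration) of the kind of linear interpolation I was hoping fill(linear) would apply to the zero rows above. It treats the two zero samples between 08:26 and 08:29 as missing and reconstructs them from their neighbors:

```python
# Sketch: linearly interpolate counter samples that arrived as 0
# (minutes where no egress_stats.if_octets data was received).
# Values are the outOctets counters from the table above.
samples = [
    ("08:26", 2580803798672772),
    ("08:27", 0),  # no data received
    ("08:28", 0),  # no data received
    ("08:29", 2582900540676924),
]

def interpolate_zeros(points):
    """Replace runs of 0 values with linear interpolation between the
    surrounding non-zero samples (assumes gaps are interior, not at the edges)."""
    out = [v for _, v in points]
    i = 0
    while i < len(out):
        if out[i] == 0:
            j = i
            while j < len(out) and out[j] == 0:
                j += 1                        # find the end of the zero run
            lo, hi = out[i - 1], out[j]       # neighbors on either side
            step = (hi - lo) / (j - i + 1)    # even increments across the gap
            for k in range(i, j):
                out[k] = lo + step * (k - i + 1)
            i = j
        else:
            i += 1
    return out

filled = interpolate_zeros(samples)
# The two zero rows are replaced with evenly spaced values between
# 2580803798672772 and 2582900540676924.
```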
Then, when I run the query below to calculate bits per second, I get a huge spike of 345 Tbps (which I’m definitely not generating):
SELECT non_negative_difference(last("outOctets")) / 60 * 8 FROM interfaces WHERE device =~ /J02/ AND "interface_parent" =~ /ae23$/ AND type =~ /egress_stats.if_octets/ AND time >= 1569054148288ms AND time <= 1569055527331ms GROUP BY time(1m)
name: interfaces
time                    non_negative_difference
----                    -----------------------
2019-09-21T08:23:00Z    68306363141.333336
2019-09-21T08:24:00Z    93223582183.2          ========> 93 Gbps ((08:24 - 08:23) / 60 * 8) GOOD VALUE
2019-09-21T08:25:00Z    71432597573.86667      ========> 71 Gbps ((08:25 - 08:24) / 60 * 8) GOOD VALUE
2019-09-21T08:26:00Z    68313640380.4          ========> 68 Gbps ((08:26 - 08:25) / 60 * 8) GOOD VALUE
2019-09-21T08:28:00Z    0
2019-09-21T08:29:00Z    344386738756923.2      ========> 345 Tbps ((08:29 - 08:28) / 60 * 8) BAD VALUE
2019-09-21T08:31:00Z    344619697156456.94     ========> 345 Tbps ((08:31 - 08:30) / 60 * 8) BAD VALUE
2019-09-21T08:32:00Z    46592828322.13333
2019-09-21T08:34:00Z    344899291517911.44     ========> 345 Tbps ((08:34 - 08:33) / 60 * 8) BAD VALUE
2019-09-21T08:35:00Z    71414271260.93333
2019-09-21T08:37:00Z    345178811403954.1      ========> 345 Tbps ((08:37 - 08:36) / 60 * 8) BAD VALUE
2019-09-21T08:40:00Z    345458377140870.56
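To show where the spike comes from: the bad values are exactly what you get when the difference spans one of the zero rows, so the subtraction runs from 0 up to the full counter value instead of a one-minute delta. A quick Python check of the arithmetic, using the counter values from the first table:

```python
# Reproducing the 08:29 "bad value": the previous sample (08:28) was 0,
# so the difference is the entire counter value, not a one-minute delta.
prev = 0                     # 08:28 sample (no data received, stored as 0)
curr = 2582900540676924      # 08:29 outOctets counter
bps = (curr - prev) / 60 * 8
print(bps)                   # ~3.44e14, i.e. ~345 Tbps

# A "good" interval for comparison: 08:23 -> 08:24
good = (2579755701888115 - 2579056525021741) / 60 * 8
print(good)                  # ~9.32e10, i.e. ~93 Gbps
```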
Should InfluxDB be able to fill(linear) the missing data points?
Thank you,
Mohsin