I’m running InfluxDB 1.8.9 OSS in a Docker container on a Synology DS1522+ NAS with DSM 7.1.
I’m seeing very high disk write rates of about 2 MB/s when inserting points with two tags and one field value.
I’m using the “influxdb out” node in Node-RED v2.2.3, writing at a rate of about 5 points per second.
Device  tps     kB_read/s  kB_wrtn/s  kB_dscd/s  kB_read  kB_wrtn  kB_dscd
sata2   141.00  0.00       2325.00    0.00       0        2325     0
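To put those numbers in perspective, a quick back-of-the-envelope calculation (the per-point payload size is a rough assumption) shows how large the gap between the ingest rate and the observed disk writes is:

```python
# Rough write-amplification estimate.
# point_bytes is an assumption: a generous size for one line-protocol
# point with two tags and one field.
point_bytes = 100
points_per_second = 5

ingest_rate = point_bytes * points_per_second   # bytes/s actually sent by the client
observed_rate = 2_325 * 1024                    # kB_wrtn/s from iostat, in bytes/s

amplification = observed_rate / ingest_rate
print(round(amplification))  # → 4762, i.e. thousands of times the raw payload
```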
Logging is turned off via
- INFLUXDB_REPORTING_DISABLED = true
- INFLUXDB_HTTP_LOG_ENABLED = false
- INFLUXDB_QUERY_LOG_ENABLED = false
And I’ve set INFLUXDB_DATA_WAL_FSYNC_DELAY to 1s
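For reference, a compose-style sketch of how these variables can be passed to the container (the image tag and host volume path are assumptions):

```yaml
# docker-compose sketch - adjust image tag and host path as needed
services:
  influxdb:
    image: influxdb:1.8.9
    environment:
      - INFLUXDB_REPORTING_DISABLED=true
      - INFLUXDB_HTTP_LOG_ENABLED=false
      - INFLUXDB_QUERY_LOG_ENABLED=false
      - INFLUXDB_DATA_WAL_FSYNC_DELAY=1s
    volumes:
      - /volume1/docker/influxdb:/var/lib/influxdb  # assumed host path on DSM
```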
Any ideas what I’m doing wrong and how I can reduce the disk I/O?
Thanks in advance
That’s very odd. How many points are in each measurement?
Thanks for coming back to my question!
Sorry, I’m not quite up to speed with the terminology - exactly one point is inserted at a time. Each point has two numerical tags and one numerical field value.
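For reference, one such point in InfluxDB line protocol would look roughly like this (measurement, tag, and field names are made up):

```
sensors,sensor_id=12,channel=3 value=21.5
```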
The conf file is more or less empty:
[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  engine = "tsm1"
  wal-dir = "/var/lib/influxdb/wal"
Did I miss something important?
Is it possible that these settings cause this high write load on each insert?
CREATE RETENTION POLICY "rp_1w" ON "high_volume" DURATION 1w REPLICATION 1 DEFAULT
CREATE RETENTION POLICY "rp_forever" ON "high_volume" DURATION INF REPLICATION 1
CREATE CONTINUOUS QUERY "cq_5min" ON "high_volume"
BEGIN
  SELECT mean(value) AS mean_value, max(value) AS max_value, min(value) AS min_value
  INTO "high_volume"."rp_forever"."sensors"
  FROM "high_volume"."rp_1w"."sensors"
  GROUP BY time(5m), *
END
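If the CQ runs as intended, the downsampled series in rp_forever should be queryable with something like this (sketch; the 30d window is arbitrary):

```sql
SELECT mean_value, max_value, min_value
FROM "high_volume"."rp_forever"."sensors"
WHERE time > now() - 30d
```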
Hm, I’ve removed the CQ and the load is still the same - any more ideas?
It’s getting stranger - on a Raspberry Pi running a Docker image with InfluxDB 1.8.4, with the same settings and influxdb.conf, I see I/O rates of about 50 kB/s.