High memory usage and constant CPU usage after data re-insert

We have a database storing sensor data; it has been running for 4 months now. We have around 6,000 sensors, but I think only about 2,000 are actually active. They send data at 1-minute intervals. In total we have maybe 10-15 GB of data.

We have been using node-influx (https://github.com/node-influx/node-influx) on our server, which transforms the data and stores it in the database.

Originally we used an epoch timestamp in milliseconds, but for some reason that did not work with continuous queries: the data was stored incorrectly, we could query values using epoch timestamps but not dates, and all the records showed an epoch-0 timestamp when queried.
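
Roughly, the writes looked like the sketch below before the re-insert. This is a simplified reconstruction, not our exact code; the database, measurement, tag, and field names are placeholders.

```ts
import { InfluxDB, IPoint } from 'influx';

const influx = new InfluxDB({
  host: 'localhost',
  database: 'sensors', // placeholder database name
});

// Before the re-insert: the timestamp was the raw epoch value in
// milliseconds, passed as a plain number, with no precision option
// given to writePoints().
const point: IPoint = {
  measurement: 'sensor_data',   // placeholder measurement
  tags: { sensorId: '1234' },   // placeholder tag
  fields: { value: 42.1 },      // placeholder field
  timestamp: 1571234567890,     // epoch milliseconds
};

influx.writePoints([point]).catch(err => console.error(err));
```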

For this reason we re-inserted the data into the database over a couple of days, replacing the epoch timestamp with a Date object. The library does something with the Date object, most likely converting it into a nanosecond timestamp and maybe something else. Now the continuous queries work, and all the other query formats work as well.
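
After the re-insert, the only change on our side is that the timestamp is now a Date object (again simplified, same placeholder names as above):

```ts
import { InfluxDB, IPoint } from 'influx';

const influx = new InfluxDB({ host: 'localhost', database: 'sensors' });

// After the re-insert: the timestamp is a Date object; node-influx
// formats it for the line protocol instead of us passing a raw number.
const point: IPoint = {
  measurement: 'sensor_data',           // placeholder measurement
  tags: { sensorId: '1234' },           // placeholder tag
  fields: { value: 42.1 },              // placeholder field
  timestamp: new Date(1571234567890),   // Date object instead of epoch ms
};

influx.writePoints([point]).catch(err => console.error(err));
```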

The problem is that where the database previously used next to no memory and only stressed the CPU during heavy queries, it now constantly keeps one core at 100% and quickly ramps up to 12 GB of memory, after which memory usage slowly creeps up and can choke the whole server once swap steps into the picture.

During startup the ramp-up is so heavy that 4 cores and 28 GB of memory are not enough and the server starts swapping. With 8 cores and 32 GB of memory we are able to run the system.

Before the re-insertion, we were perfectly fine with 4 cores and 16 GB of memory.

We are using the tsi1 index and our series cardinality is around 15k. There are 6 databases: the original plus 5 aggregation databases that are populated by continuous queries.
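
For context, each aggregation database is fed by a continuous query along these lines (the database, measurement, and interval names are placeholders, not our exact CQs; the statements were created once through the influx CLI, not through node-influx):

```ts
// Sketch of one of the downsampling continuous queries, kept here as a
// string purely for illustration; all names are placeholders.
const downsampleCq = `
  CREATE CONTINUOUS QUERY "cq_sensor_1h" ON "sensors"
  BEGIN
    SELECT mean("value") AS "value"
    INTO "sensors_1h"."autogen"."sensor_data"
    FROM "sensors"."autogen"."sensor_data"
    GROUP BY time(1h), *
  END
`;
```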

Any idea why this might be happening, and what we could do about it?
