Need help optimizing memory usage

We have about 40 GB of memory and InfluxDB is using it all. Will converting from the inmem index to tsi overcome the problem?
We have about 1.3 TB of data.

Using the tsi index should help, but it won't work miracles: memory usage mostly depends on series cardinality, and therefore on your data schema.

In general my suggestion is: if you don't need something as a tag, don't store it as a tag, especially unique IDs or other unique "row" identifiers (by which I mean the equivalent of a primary key in a SQL database), as those will make your cardinality explode. See the line-protocol sketch below.
In case you really need something like that as a tag, you might want to use a short RP in order to keep the number of values somewhat limited.
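
To make that first point concrete, here is a line-protocol sketch (the measurement, tag, and field names are made up for illustration):

```
# BAD: request_id is a tag, so every request creates a new series
# and the index grows without bound
http_requests,host=web01,request_id=f81d4fae latency_ms=12.3 1609459200000000000

# BETTER: request_id is a field, so the series count stays bounded
# by the number of distinct host values
http_requests,host=web01 request_id="f81d4fae",latency_ms=12.3 1609459200000000000
```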

You can have a look at the hardware sizing guidelines to understand if what you actually have is appropriate.

Giovanni_Luisotto,

Thank you for the reply.

A few more questions:

  1. Is it possible to find which DB uses the most tags, so that we can set a lower RP on that one?
  2. If we convert from inmem to tsi, will it be applied to existing DBs/data as well, and would it cause data corruption?

If we convert from inmem to tsi, will it be applied to existing DBs/data as well, and would it cause data corruption?

The conversion has to be performed with the DB offline; here is the procedure to follow.
I've never had data corruption, but sometimes I got errors and had to repeat the conversion for specific shards. (Shards are converted one by one; you will get an error if one fails and the procedure will continue, so if you track all the errors you can repeat the conversion only for those specific shards.)
Note that the DB can be started even with mixed shards, but you won't be able to perform some operations, like a DROP MEASUREMENT (and I'm not sure what else).
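
Roughly, the offline conversion looks like this (a sketch assuming a systemd install with default Linux paths; adapt the paths, user, and service name to your setup, and check the linked procedure first):

```
# 1. Stop InfluxDB so no shard is written during the conversion
sudo systemctl stop influxdb

# 2. Rebuild the index as the user InfluxDB runs as
sudo -u influxdb influx_inspect buildtsi \
    -datadir /var/lib/influxdb/data \
    -waldir /var/lib/influxdb/wal

# If a shard fails, you can re-run the conversion for just that shard,
# e.g. (mydb/autogen/42 are placeholders):
#   influx_inspect buildtsi -datadir ... -waldir ... \
#       -database mydb -retention autogen -shard 42

# 3. Set index-version = "tsi1" in the [data] section of influxdb.conf

# 4. Start InfluxDB again
sudo systemctl start influxdb
```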

Is it possible to find which DB uses the most tags, so that we can set a lower RP on that one?

For this you can use SHOW SERIES CARDINALITY and its exact counterpart, SHOW SERIES EXACT CARDINALITY.
Here are the docs.
Note that the RP duration might not affect your cardinality at all.
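
For instance (run these per database; mydb, http_requests, and request_id are placeholder names):

```
-- estimated series count (cheap)
SHOW SERIES CARDINALITY ON mydb

-- exact series count (more expensive, scans the index)
SHOW SERIES EXACT CARDINALITY ON mydb

-- drill down to find which measurement or tag key drives the cardinality
SHOW MEASUREMENT CARDINALITY ON mydb
SHOW TAG VALUES EXACT CARDINALITY ON mydb FROM http_requests WITH KEY = "request_id"
```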

Giovanni_Luisotto,

I have found a few DBs with very high cardinality (more than 1,000,000 series). Do you have any suggestions to lower the cardinality of those DBs so that we can free up some memory?

Thanks

This is about schema design, and therefore about what you save as tags versus fields.

  • Is a schema change viable?
  • What requirements do you have (for querying data, but not only)?
  • Do you need all the data you are fetching?

Once you sort out those questions, you can have a look around the forum, as there are plenty of solutions covering the how-to.

As a note, making a schema change (like reducing the number of tags) will actually need more memory, at least temporarily, because a whole new "index" entry will be created for the changed series, and both will exist until the old series goes out of RP or is deleted.
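
To illustrate with hypothetical series keys: the series key is the measurement plus all its tag key/value pairs, so dropping (or adding) a tag produces a different key, and the old one stays in the index until its data expires:

```
# before the schema change: the key includes the high-cardinality tag
http_requests,host=web01,request_id=f81d4fae

# after moving request_id to a field: a brand-new series key
http_requests,host=web01
```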
