Some questions regarding TSM > TSI1 shard conversion

Hi,

I'm planning to convert to TSI1. I have two InfluxDB instances at 1.7.3 (one was initially installed with 1.5.0, the other with 1.6.0). Both use default settings and run on top of SSDs.

I have some questions:

  1. Given a server with a comfortable amount of RAM, would it be better performance-wise to stay on TSM?
  2. After I change the configuration to use TSI1 and restart InfluxDB, do I need to wait for current TSM shards to fill up before launching the conversion? I’m using 1-day shard duration on all databases.
  3. Or will it immediately start writing new TSI1 shards? If so, once the old shard is converted, will I end up with two shards covering two periods of the same day?
  4. Can the conversion utility be launched with InfluxDB online, or does it need to be down?
  5. Do I need to remove any old index directories?

Thanks!

TSM is how we store the data, so there’s no trade-off between TSI and TSM; they aren’t alternatives to each other.

The indexing options are “inmem” and TSI. inmem is simple: it reads all the data in all TSM files when the process starts and stores the index in memory.

TSI is an on-disk index, so the process starts much faster because it doesn’t have to read every TSM file. TSI also uses less memory, and has other trickery that makes it faster than inmem in most cases.
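For reference, the index type is selected with the `index-version` setting in the `[data]` section of the config file. A minimal sketch, assuming the default Linux package layout (your config path may differ):

```toml
# /etc/influxdb/influxdb.conf (default path on Linux packages; adjust as needed)
[data]
  # "inmem" is the 1.x default: the index is rebuilt in memory from TSM files at startup.
  # "tsi1" keeps the index on disk instead.
  index-version = "tsi1"
```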

1 - TSI is usually faster than inmem
2,3,4 - You’ll stop InfluxDB, change the config, create the TSI indexes, and restart InfluxDB (a command-line sketch of these steps follows below)
5 - Don’t delete any existing on-disk data. After the migration you’ll see where the TSI index files live; everything else is either TSM data or series key indexes. The series key indexes are rebuilt automatically if they go missing, but unless someone is sitting next to you, just don’t delete things and expect your life to be simple.
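For anyone who wants the concrete steps, here’s a rough sketch on a systemd-based Linux install with default paths; the service name, paths, and user are assumptions and may differ on your setup:

```bash
# 1. Stop InfluxDB
sudo systemctl stop influxdb

# 2. Edit influxdb.conf and set index-version = "tsi1" (see the [data] snippet above)

# 3. Build TSI index files from the existing TSM data; running as the influxdb
#    user keeps file ownership correct
sudo -u influxdb influx_inspect buildtsi \
  -datadir /var/lib/influxdb/data \
  -waldir /var/lib/influxdb/wal

# 4. Start InfluxDB again
sudo systemctl start influxdb
```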

Thanks for the confirmations!

Personally, I had some issues with file and folder permissions when I did the switch on Linux. Other than that, the switch went without issues.

The inspect utility did warn me to run it as the InfluxDB user, which is very convenient for avoiding permission issues. This was on 1.6.4; maybe older versions didn’t warn about it.
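If ownership does get mixed up (say, by running the tool as root), something along these lines usually fixes it, assuming the default `influxdb` user/group and data location:

```bash
# Hand the data, WAL, and new index files back to the influxdb service account
sudo chown -R influxdb:influxdb /var/lib/influxdb
```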

By the way, everything went very smoothly.