How to measure disk space requirement per bucket

Hi,
I’ve checked through the documentation but was unable to find anything that would allow me to identify how much disk space a bucket is using.

I currently use a primitive data lake and bzip2 files older than a few days. I understand from the docs that InfluxDB has some compression built in, but it’s optimized for speed rather than smallest size, which makes sense.

I need to quantify what the difference would be if I migrated our data into InfluxDB compared to what we use today.

If it matters, please confirm whether the makeup of columns and fields would affect the compression achieved.

Thanks in advance

@scott or @Anaisdg, could either of you offer any advice? Even an approximation would help.

@FixTestRepeat to my knowledge, there isn’t a way (currently) to identify exactly how much disk space each bucket is using. This may be worth an issue/feature request on the InfluxDB repo.
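One rough workaround is to measure the bucket directories on disk directly. This is a sketch that assumes a default InfluxDB 2.x OSS install, where each bucket's TSM data lives in a directory named after the bucket ID under the engine data path (`~/.influxdbv2/engine/data` by default; adjust `ENGINE_DATA` for your setup). It only reflects on-disk TSM/WAL files, so recent, not-yet-compacted writes may skew the numbers:

```python
import os

def dir_size(path):
    """Total size in bytes of all regular files under path, recursively."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

# ASSUMPTION: default InfluxDB 2.x OSS layout -- one directory per bucket ID
# under the engine data path. Change this to match your installation.
ENGINE_DATA = os.path.expanduser("~/.influxdbv2/engine/data")

if os.path.isdir(ENGINE_DATA):
    for bucket_id in sorted(os.listdir(ENGINE_DATA)):
        bucket_path = os.path.join(ENGINE_DATA, bucket_id)
        if os.path.isdir(bucket_path):
            print(f"{bucket_id}: {dir_size(bucket_path) / 1024**2:.1f} MiB")
```

You can map the directory names back to bucket names with `influx bucket list`, which shows each bucket's ID.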

In regards to compression, your data schema will affect compression. Some factors to consider are:

  • Series cardinality
  • Field types (ex: strings don’t compress as well as integers)
