Hello everyone,
For our project’s needs, we need a time-series (TS) database. We have tested different kinds of databases through POCs, and we are now testing InfluxDB, which should suit our needs.
Nevertheless, we saw that InfluxDB is optimized for SSD storage. Because our app will be deployed at our customers’ sites, we are very constrained on the hardware side: HDDs only (we cannot “force” clients to buy a bunch of SSDs, taking the RAID possibilities into account too …).
Also, we made a worst-case estimate in which our database, with 20 years of historical data, could hold up to 260 billion rows.
So for our tests/benchmarks, we use a bash script that dynamically creates .txt files in Line Protocol syntax and imports them with the Influx CLI’s -import parameter, which works really well. But something bothers us:
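For context, here is a minimal sketch of what our generation script does, assuming the InfluxDB 1.x import file format (`# DDL` / `# DML` sections followed by Line Protocol points). The database, measurement, tag, and field names below are made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch: generate one Line Protocol import file, then load it with the
# Influx 1.x CLI. Names (benchdb, sensor_data, device, value) are examples.

DB="benchdb"          # hypothetical database name
OUT="batch_0001.txt"  # one generated import file

{
  echo "# DDL"
  echo "CREATE DATABASE ${DB}"
  echo ""
  echo "# DML"
  echo "# CONTEXT-DATABASE: ${DB}"
  # One point per line: measurement,tag_set field_set timestamp (ns)
  for i in $(seq 1 5); do
    ts=$(( 1609459200000000000 + i * 1000000000 ))   # points 1 s apart
    echo "sensor_data,device=dev${i} value=$(( i * 10 )) ${ts}"
  done
} > "${OUT}"

# Import the generated file (commented out so the sketch runs standalone):
# influx -import -path="${OUT}" -precision=ns
```

In the real script we generate files with many more points per file, since batching points into larger import files already reduces the number of write calls against the server.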
Even when we import data from files, how does the writing process work on an HDD? Does InfluxDB, by default, buffer 10,000 points in memory and THEN write those 10,000 points in one shot (like a transaction in relational database terms)?
For now, we have ingested a 10-billion-row database, but we are afraid that if InfluxDB writes every single point to disk individually (I assume not; even on an HDD, that would make no sense for a DBMS), it could prematurely wear out our HDDs, and in that case we could not use InfluxDB as a long-term data storage system.
I hope I’ve been clear in my explanations and questions.