Performance to the extreme

Hello! We deal with projects that have extreme and varied database requirements. Some projects require a high number of measurements (tens of thousands) at a moderate data rate; others require a moderate number of measurements at a high write rate (hundreds of thousands of points per second per measurement).
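For context on what a "point" means here: InfluxDB ingests points in its line protocol, one point per line (measurement name, optional tags, fields, and a timestamp). A minimal sketch in Python, with a made-up measurement and field name for illustration:

```python
def make_point(measurement: str, value: float, ts_ns: int) -> str:
    """Format one point in InfluxDB line protocol:
    <measurement> <field>=<value> <timestamp>
    (no tags in this minimal example)."""
    return f"{measurement} value={value} {ts_ns}"

# Hypothetical example: one point for a sensor measurement.
point = make_point("sensor_42", 0.5, 1465839830100400200)
# → "sensor_42 value=0.5 1465839830100400200"
```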

We would like to use a single database node if possible, to simplify installation and maintenance.

We ran basic tests on a single machine (SSD, i7, 16 GB RAM) against an empty database, writing over HTTP. As recommended, we found batches of 100k points to be optimal.


  • A single write client could write up to 250k points / second.
  • Adding up to 5 clients pushed the total write rate to 400k points / second.
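The setup above (100k-point batches over HTTP, several concurrent writer clients) can be sketched as follows. This is only an illustration under assumed names: the endpoint URL and database name (`mydb`) are placeholders, and the points are pre-formatted line-protocol strings:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib import request

BATCH_SIZE = 100_000  # batch size found optimal in the tests above

def build_batches(points, batch_size=BATCH_SIZE):
    """Group line-protocol lines into newline-joined request bodies."""
    return ["\n".join(points[i:i + batch_size])
            for i in range(0, len(points), batch_size)]

def post_batch(body, url="http://localhost:8086/write?db=mydb"):
    """POST one batch to the HTTP /write endpoint.
    URL and database name are assumptions for this sketch."""
    req = request.Request(url, data=body.encode(), method="POST")
    with request.urlopen(req) as resp:
        return resp.status

def write_concurrently(batches, clients=5):
    """Fan the batches out over several writer clients in parallel."""
    with ThreadPoolExecutor(max_workers=clients) as pool:
        return list(pool.map(post_batch, batches))
```

With a running InfluxDB instance, `write_concurrently(build_batches(points))` would replay the multi-client scenario described in the bullets.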

Is this in line with what would be expected, or could we somehow push the limits even further?


We recently published some benchmarking examples comparing InfluxDB against Cassandra for time-series workloads.

You can read more about this here:

More useful links for you:

Hello! Thank you for your reply. 2.8 million points per second with 4 concurrent writers seems 6 times faster than what we were able to achieve. Did you use a single-node or a clustered database setup? What hardware was used to perform these tests?