6,000,000 writes in 1 minute: is this possible?

I have 5,000 devices, each reading data from a sensor 20 times per second.
I'm pushing all of this data into Kinesis, but I want to make it queryable,
so I'm thinking of polling the data from Kinesis and storing it in InfluxDB.
I will have a lot of writes and not much reading (5,000 × 20 × 60 = 6,000,000 writes per minute).
I want to understand whether InfluxDB will fit this requirement and what kind of hardware I would need to handle it.
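To sanity-check the numbers: 5,000 devices × 20 samples per second × 60 seconds works out to exactly 6,000,000 points per minute, or 100,000 points per second sustained:

```python
devices = 5_000
samples_per_sec = 20

# Total write throughput the cluster must sustain.
writes_per_min = devices * samples_per_sec * 60   # 6,000,000 points/min
writes_per_sec = writes_per_min // 60             # 100,000 points/sec

print(writes_per_min, writes_per_sec)
```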

@rpuserh You should be able to accommodate this on an 8-core, 16GB RAM instance. Something like a c4.2xlarge.
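One thing not spelled out in the thread: to hit this rate in practice, writes are normally batched (on the order of a few thousand points per request) rather than sent one point at a time. A minimal sketch of such a buffer, where `flush_fn` is a stand-in for whatever actually ships the batch to InfluxDB (e.g. an HTTP POST of line protocol), not a real client API:

```python
class BatchBuffer:
    """Accumulate line-protocol points and flush them in fixed-size batches.

    flush_fn is a hypothetical callable that receives one newline-joined
    batch of points; in a real consumer it would write to InfluxDB.
    """

    def __init__(self, flush_fn, batch_size=5000):
        self.flush_fn = flush_fn
        self.batch_size = batch_size
        self.points = []

    def add(self, point):
        self.points.append(point)
        if len(self.points) >= self.batch_size:
            self.flush()

    def flush(self):
        # Send whatever has accumulated, including a final partial batch.
        if self.points:
            self.flush_fn("\n".join(self.points))
            self.points = []


# Usage with a dummy flush function that just collects batches:
batches = []
buf = BatchBuffer(batches.append, batch_size=3)
for i in range(7):
    buf.add(f"sensor,device=d{i} value={i} {i}")
buf.flush()  # 7 points at batch_size=3 -> batches of 3, 3, and 1
```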

And is there any calculator to estimate how much disk space I will need?

@rpuserh Float and integer values will compress down to 2-3 bytes per point, so for every billion values you will need about 3GB. String fields are compressed using Snappy, so it's not quite as good on that front.
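Plugging the 2-3 bytes-per-point figure into the workload above gives a rough daily disk estimate (numeric fields only; string fields would compress worse):

```python
writes_per_min = 6_000_000
points_per_day = writes_per_min * 60 * 24     # 8.64 billion points/day

# Compressed size range for float/integer values: 2-3 bytes per point.
gb_per_day_low = points_per_day * 2 / 1e9     # ~17.3 GB/day
gb_per_day_high = points_per_day * 3 / 1e9    # ~25.9 GB/day

print(gb_per_day_low, gb_per_day_high)
```

So at this ingest rate you would be looking at roughly 17-26 GB of on-disk data per day, before accounting for replication or any retention policy that expires old data.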