Hi Community,
I’m migrating from InfluxDB 1.8 to InfluxDB OSS 2.1. Our use case is to use InfluxDB as a buffer. However, I have a performance issue when retrieving the buffered data: the query takes too long to complete (4m37s to load 159,843 records). Each record contains a field holding JSON data with the following schema:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "array",
  "items": [
    {
      "type": "object",
      "properties": {
        "TagName": {
          "type": "string"
        },
        "Value": {
          "type": "integer"
        },
        "TimeStamp": {
          "type": "string"
        },
        "Type": {
          "type": "string"
        }
      },
      "required": [
        "TagName",
        "Value",
        "TimeStamp",
        "Type"
      ]
    }
  ]
}
Each record’s JSON array also contains 600 items. To read the data I’m using the following Flux query via the C# client library:
$"from(bucket: \"{dbName}\")\n"
+ "  |> range(start: -30d)\n"
+ $"  |> filter(fn: (r) => r[\"_measurement\"] == \"{Buffer_Measurement}\")\n"
+ $"  |> filter(fn: (r) => r[\"SchemaId\"] == \"{schemaId}\")";
I would like to optimize the read time but haven’t found a way to do so, so I’m looking forward to hearing your recommendations.
Is it possible to read all the data in separate queries, and how can I index the read data? Please note that I tried the limit() function, but it causes memory growth in the InfluxDB server process.
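By “separate queries” I mean something like splitting the 30-day range into smaller time windows and issuing one query per window, then advancing the window until the whole range is covered. A sketch of one such window (the bucket, measurement, and SchemaId values here are placeholders, and the 1-day window size is just illustrative):

```flux
// One query per time window; repeat with shifted start/stop
// bounds until the full 30-day range is covered.
from(bucket: "myBucket")
  |> range(start: -30d, stop: -29d)  // 1-day window; advance per query
  |> filter(fn: (r) => r["_measurement"] == "myMeasurement")
  |> filter(fn: (r) => r["SchemaId"] == "mySchemaId")
```

Would issuing several bounded queries like this be cheaper for the server than one large range() plus limit(), or is there a better pattern?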
Thank you.