Block read error on __PointKey__ StringArrayEncodeAll: source length too large

I’m facing some issues with my InfluxDB instance, which is allocating all the available RAM on the server.
I’m running InfluxDB v1.8.3 on Windows; it worked (almost) fine until now.

The only “error” I have in the log is this one:
block read error on __PointKey__ StringArrayEncodeAll: source length too large

That measurement contains the SQL statement executed on SQL Server, which is saved as a string field (and can be huge).
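
In case it helps, I’m thinking about capping that field before it ever reaches InfluxDB. Here is a rough sketch in Python (this is my own code, not the Telegraf sqlserver plugin; the 64 KiB cap and the names are arbitrary):

from influxdb import InfluxDBClient  # python-influxdb client, works with InfluxDB 1.8

MAX_STMT_LEN = 64 * 1024  # arbitrary cap on statement_text per point

client = InfluxDBClient(host="localhost", port=8086, database="monitor")

def write_request(tags, statement_text, timestamp):
    # Truncate oversized SQL text so a single field value can't inflate
    # the string blocks that TSM has to compress later.
    if len(statement_text) > MAX_STMT_LEN:
        statement_text = statement_text[:MAX_STMT_LEN] + " ...<truncated>"
    client.write_points([{
        "measurement": "sqlserver_requests",
        "tags": tags,
        "time": timestamp,
        "fields": {"statement_text": statement_text},
    }])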

Here is the InfluxDB log:

ts=2020-10-06T11:00:25.142416Z lvl=warn msg="Error compacting TSM files" log_id=0PgJeCr0000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0PgNLqZl000 op_name=tsm1_compact_group error="block read error on I:\\Monitoring\\influxdb\\SqlServerGT\\data\\monitor\\standard\\387\\000001039-000000002.tsm: encode error: unable to compress block type string for key 'sqlserver_requests,command=EXECUTE,host=SRV-MGMAPPPRD03,host_name=SRV-DLOAPPPRD02,nt_user_name=Svc_APDLOCOL-P,program_name=.Net\\ SqlClient\\ Data\\ Provider,session_db_name=MyDatabase,sql_instance=SRV-DLOSQLPRD01,status=running,stmt_db_name=MyDatabase,transaction_isolation_level=2-Read\\ Committed#!~#statement_text': StringArrayEncodeAll: source length too large, block read error on I:\\Monitoring\\influxdb\\SqlServerGT\\data\\monitor\\standard\\387\\000001039-000000002.tsm: encode error: unable to compress block type string for key 'sqlserver_requests,command=EXECUTE,host=SRV-MGMAPPPRD03,host_name=SRV-DLOAPPPRD02,nt_user_name=Svc_APDLOCOL-P,program_name=.Net\\ SqlClient\\ Data\\ Provider,session_db_name=MyDatabase,sql_instance=SRV-DLOSQLPRD01,status=running,stmt_db_name=MyDatabase,transaction_isolation_level=2-Read\\ Committed#!~#statement_text': StringArrayEncodeAll: source length too large"
ts=2020-10-06T11:00:30.138592Z lvl=info msg="TSM compaction (end)" log_id=0PgJeCr0000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0PgNLqZl000 op_name=tsm1_compact_group op_event=end op_elapsed=1691496.690ms

Can this “error” cause any issues? Will the file be compacted or not?

Hello @Giovanni_Luisotto,
I don’t know the answer to your question, and I apologize for the delay. I’ve asked someone on the team to take a look and they’ll get back to you. Thank you for your patience.

I’ve solved the issue regarding the RAM usage: the disk was too slow to keep up with all the data being written, so Telegraf (which is on the same machine) reached its maximum buffer size and InfluxDB was also allocating a huge amount of RAM.

I don’t know exactly how it works, but my guess is that all the data is kept in RAM until the disk is able to write it, and since the disk was too slow the RAM got saturated with data waiting to be written.
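
Just to illustrate my mental model (a toy sketch, not Telegraf’s or InfluxDB’s actual code): the inputs keep producing points, the output side drains them only as fast as the disk accepts writes, and once the bounded buffer is full the oldest points are dropped.

from collections import deque

BUFFER_SIZE = 10_000  # assumption: the role Telegraf's metric_buffer_size plays

buffer = deque(maxlen=BUFFER_SIZE)  # drop-oldest once the buffer is full

def collect(metric):
    # Input plugins keep appending no matter how slow the output is.
    buffer.append(metric)

def flush(write_batch, batch_size=1000):
    # The output drains the buffer only as fast as InfluxDB / the disk can accept writes.
    while buffer:
        batch = [buffer.popleft() for _ in range(min(batch_size, len(buffer)))]
        write_batch(batch)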

I’d still like to know whether the error above prevents the shard from compacting or not. (I also know how big the data that broke it was… it was something like a 43k-line SQL query.)
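
In case anyone wants to check their own data for oversized values: InfluxQL in 1.8 has no string-length function, so the simplest thing I found is to pull the field values and measure them client-side. A rough sketch (the threshold and time range are arbitrary):

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="monitor")

# Pull the raw statement_text values and measure them on the client;
# narrow the time range so the query stays cheap.
result = client.query(
    "SELECT statement_text FROM sqlserver_requests WHERE time > now() - 1d"
)

THRESHOLD = 100_000  # arbitrary: flag anything over ~100 KB
for point in result.get_points():
    text = point.get("statement_text") or ""
    if len(text) > THRESHOLD:
        print(point["time"], len(text), "bytes of SQL text")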
