I am testing a patch to InfluxDB that I wrote to fix the 2 or 3 GB data limit on 32-bit systems (see the GitHub issues below). I'm having trouble testing the fix because I'm having difficulty actually getting the data directory to a sufficient size. I filled a single measurement to around 300 MB and have been copying those data into other measurements using select * into x from y. I need to do this multiple times to begin to get a good DB size, and it is sloooooow. As in, "I need to run it overnight and it's still not done" slow. I'm running on an embedded system, so I expect it to be slower than a big ol' server, but it's almost as slow as just streaming random data into the DB (which is how I got the 300 MB to begin with). Surely copying data from one measurement to another should be much faster, right?
Another problem is that InfluxDB appears to compact the TSM files almost as fast as I can populate them in some instances, even when I set the cold duration to something huge like 40000h. Compaction starts as soon as I start InfluxDB no matter what I set it to, and in top I can see influxd chugging along, working hard. I'd like to disable compaction entirely, but I don't seem to be tweaking the right dial.
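For reference, the dials I've been trying to turn live in the `[data]` section of influxdb.conf. The setting names below are taken from the 1.x configuration reference (they may differ slightly in older builds), and as far as I can tell they only delay cache snapshots and "full" compactions; the level compactions that merge freshly written TSM files don't appear to have an off switch in these versions:

```
[data]
  # how long a shard must go without writes before a "full" compaction runs
  compact-full-write-cold-duration = "40000h"

  # how long a shard must be write-cold before the in-memory cache
  # is snapshotted to a new TSM file
  cache-snapshot-write-cold-duration = "10m"
```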
So essentially I'm looking for guidance on how to fill up my database as efficiently as possible and keep it at a large size so I can test my fix before opening a PR. Any ideas, anyone?
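For the record, the chunked variant of the copy I've been experimenting with looks roughly like this. The database and measurement names (mydb, src, dst) are placeholders, and the GROUP BY * is there so tags stay tags instead of being collapsed into fields:

```
copy_chunks() {  # copy_chunks <start_epoch_seconds> <end_epoch_seconds>
  local step=$((7 * 24 * 3600))  # one week per chunk
  local t=$1 end=$2 next q
  while [ "$t" -lt "$end" ]; do
    next=$((t + step))
    if [ "$next" -gt "$end" ]; then next=$end; fi
    q="SELECT * INTO \"dst\" FROM \"src\" WHERE time >= ${t}s AND time < ${next}s GROUP BY *"
    echo "$q"
    # influx -database mydb -execute "$q"  # uncomment to actually run each chunk
    t=$next
  done
}

# January 2016 in week-sized chunks:
copy_chunks 1451606400 1454284800
```

Smaller chunks keep each SELECT INTO's working set down, which matters a lot more on a memory-starved box than on a big server.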
GitHub issue: opened 10:06AM 31 Mar 16 UTC, closed 03:23PM 31 Mar 16 UTC
I know my system resources are limited and I don't expect high performance. I'm mostly writing (a lot of) points; query performance is not an issue.
After writing for a few days, InfluxDB crashes and cannot start again.
- 0.11: startup fails
- 0.12 nightly: InfluxDB starts, but cannot bring up the broken database (Edit: this might also be related to #6117, but the same message appears on 0.11):
```
[store] 2016/03/31 09:40:40 Failed to open shard: 33: [shard 33] error opening memory map for file /var/lib/influxdb/data/crawler_production/default/33/000000646-000000006.tsm: cannot allocate memory
```
- ~1000 points/s
- The crash always occurs after about 2-4 days (~200-400 million points)
Logfile, file list, and CPU/memory profiles are below.
If I start the same database on a "normal" system with more RAM, everything is fine (it takes about half an hour to catch up on compacting TSM files). The compacted database can then be used on the Raspberry Pi again for a few days.
Are there some settings that can be tweaked? Is it possible to decrease the time covered by a shard from 7 days to maybe 1 day? I tested lower values for cache-max-memory-size and cache-snapshot-memory-size, but that didn't seem to help.
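One lever that might apply here, as I understand the defaults (please double-check): without an explicit shard-duration setting, a retention policy lasting between 2 days and 6 months gets 1-day shard groups, while longer or infinite retention gets 7-day ones. So giving the RP a finite duration should shrink the shards, at the cost of dropping data older than that duration. A hypothetical statement for this database would be:

```
ALTER RETENTION POLICY "default" ON "crawler_production" DURATION 4w REPLICATION 1
```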
### Environment
- Raspberry Pi 2 (1 GB RAM)
- Debian/Raspbian Jessie
- Builds from "s3.amazonaws.com/influxdb" (influxdb_nightly_armhf.deb and influxdb_0.11.0-1_armhf.deb)
- version 0.12.0~n201603301305, branch master, commit acc8f3e211cf54f13739d5a9584420c7bfd6f716
### Logs
```
[run] 2016/03/31 09:43:57 InfluxDB starting, version 0.12.0~n201603301305, branch master, commit acc8f3e211cf54f13739d5a9584420c7bfd6f716
[run] 2016/03/31 09:43:57 Go version go1.4.3, GOMAXPROCS set to 4
[run] 2016/03/31 09:43:57 Using configuration at: /etc/influxdb/influxdb.conf
[run] 2016/03/31 09:43:57 writing CPU profile to: /tmp/influxdcpu.prof
[run] 2016/03/31 09:43:57 writing mem profile to: /tmp/influxdmem.prof
[store] 2016/03/31 09:43:57 Using data dir: /var/lib/influxdb/data
[tsm1wal] 2016/03/31 09:43:57 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:57 tsm1 WAL writing to /var/lib/influxdb/wal/_internal/monitor/29
[tsm1wal] 2016/03/31 09:43:57 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:57 tsm1 WAL writing to /var/lib/influxdb/wal/_internal/monitor/30
[tsm1wal] 2016/03/31 09:43:57 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:57 tsm1 WAL writing to /var/lib/influxdb/wal/_internal/monitor/1
[tsm1wal] 2016/03/31 09:43:57 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:57 tsm1 WAL writing to /var/lib/influxdb/wal/_internal/monitor/28
[filestore] 2016/03/31 09:43:57 /var/lib/influxdb/data/_internal/monitor/30/000000002-000000002.tsm (#0) opened in 3.235563ms
[cacheloader] 2016/03/31 09:43:57 reading file /var/lib/influxdb/wal/_internal/monitor/1/_00001.wal, size 161501
[cacheloader] 2016/03/31 09:43:57 reading file /var/lib/influxdb/wal/_internal/monitor/30/_00509.wal, size 0
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/29/000000002-000000002.tsm (#0) opened in 7.284612ms
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/_internal/monitor/29/_00509.wal, size 0
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/28/000000003-000000002.tsm (#0) opened in 6.615292ms
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/_internal/monitor/28/_00514.wal, size 0
[tsm1] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/30 database index loaded in 60.028611ms
[store] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/30 opened in 72.291853ms
[tsm1] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/29 database index loaded in 59.391582ms
[store] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/29 opened in 73.057527ms
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL writing to /var/lib/influxdb/wal/_internal/monitor/35
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL writing to /var/lib/influxdb/wal/_internal/monitor/31
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/35/000000016-000000004.tsm (#0) opened in 705.362µs
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/_internal/monitor/35/_00100.wal, size 0
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/31/000000001-000000001.tsm (#0) opened in 774.425µs
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/_internal/monitor/31/_00504.wal, size 0
[tsm1] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/28 database index loaded in 73.246225ms
[store] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/28 opened in 89.55034ms
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL writing to /var/lib/influxdb/wal/_internal/monitor/36
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/36/000000004-000000003.tsm (#0) opened in 597.394µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/36/000000005-000000001.tsm (#1) opened in 499.217µs
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/_internal/monitor/36/_00006.wal, size 420491
[tsm1] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/35 database index loaded in 36.393217ms
[store] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/35 opened in 50.435673ms
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL writing to /var/lib/influxdb/wal/_internal/monitor/32
[tsm1] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/31 database index loaded in 39.215291ms
[store] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/31 opened in 53.873214ms
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/32/000000009-000000002.tsm (#0) opened in 983.174µs
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/_internal/monitor/32/_00256.wal, size 0
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL writing to /var/lib/influxdb/wal/crawler_production/default/26
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/26/000000159-000000005.tsm (#0) opened in 3.184104ms
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/26/000000161-000000004.tsm (#1) opened in 775.467µs
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/crawler_production/default/26/_00686.wal, size 0
[tsm1] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/32 database index loaded in 86.313111ms
[store] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/32 opened in 92.100436ms
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL writing to /var/lib/influxdb/wal/_internal/monitor/34
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/34/000000016-000000004.tsm (#0) opened in 688.904µs
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/_internal/monitor/34/_00100.wal, size 0
[tsm1] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/34 database index loaded in 42.4695ms
[store] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/34 opened in 51.893897ms
[tsm1] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/26 database index loaded in 134.478165ms
[store] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/26 opened in 144.080999ms
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/31 09:43:58 tsm1 WAL writing to /var/lib/influxdb/wal/crawler_production/default/33
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000000422-000000006.tsm (#0) opened in 1.596974ms
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000000646-000000005.tsm (#1) opened in 807.757µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000001286-000000005.tsm (#8) opened in 756.404µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000000646-000000006.tsm (#2) opened in 111.25µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000001286-000000006.tsm (#9) opened in 1.030934ms
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000001334-000000005.tsm (#10) opened in 523.123µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000000774-000000006.tsm (#4) opened in 82.812µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000001338-000000003.tsm (#11) opened in 321.926µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000000838-000000005.tsm (#5) opened in 467.707µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000000838-000000006.tsm (#6) opened in 147.291µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000001344-000000002.tsm (#13) opened in 372.187µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000001345-000000001.tsm (#14) opened in 327.031µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000001126-000000006.tsm (#7) opened in 880.571µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000001342-000000003.tsm (#12) opened in 398.019µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000001346-000000001.tsm (#15) opened in 382.968µs
[filestore] 2016/03/31 09:43:58 /var/lib/influxdb/data/crawler_production/default/33/000000774-000000005.tsm (#3) opened in 557.238µs
[store] 2016/03/31 09:43:58 Failed to open shard: 33: [shard 33] error opening memory map for file /var/lib/influxdb/data/crawler_production/default/33/000000646-000000006.tsm: cannot allocate memory
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/_internal/monitor/1/_00002.wal, size 17484
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/_internal/monitor/1/_00003.wal, size 0
[tsm1] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/1 database index loaded in 47.348859ms
[store] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/1 opened in 400.419557ms
[cacheloader] 2016/03/31 09:43:58 reading file /var/lib/influxdb/wal/_internal/monitor/36/_00088.wal, size 0
[tsm1] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/36 database index loaded in 99.464005ms
[store] 2016/03/31 09:43:58 /var/lib/influxdb/data/_internal/monitor/36 opened in 770.594054ms
[subscriber] 2016/03/31 09:43:58 opened service
[monitor] 2016/03/31 09:43:58 Starting monitor system
[monitor] 2016/03/31 09:43:58 'build' registered for diagnostics monitoring
[monitor] 2016/03/31 09:43:58 'runtime' registered for diagnostics monitoring
[monitor] 2016/03/31 09:43:58 'network' registered for diagnostics monitoring
[monitor] 2016/03/31 09:43:58 'system' registered for diagnostics monitoring
[cluster] 2016/03/31 09:43:58 Starting cluster service
[shard-precreation] 2016/03/31 09:43:58 Starting precreation service with check interval of 10m0s, advance period of 30m0s
[monitor] 2016/03/31 09:43:58 Storing statistics in database '_internal' retention policy 'monitor', at interval 10s
[snapshot] 2016/03/31 09:43:58 Starting snapshot service
[copier] 2016/03/31 09:43:58 Starting copier service
[admin] 2016/03/31 09:43:58 Starting admin service
[admin] 2016/03/31 09:43:58 Listening on HTTP: [::]:8083
[continuous_querier] 2016/03/31 09:43:58 Starting continuous query service
[httpd] 2016/03/31 09:43:58 Starting HTTP service
[httpd] 2016/03/31 09:43:58 Authentication enabled: false
[httpd] 2016/03/31 09:43:58 Listening on HTTP: [::]:8086
[retention] 2016/03/31 09:43:58 Starting retention policy enforcement service with check interval of 30m0s
[run] 2016/03/31 09:43:58 Listening for signals
[run] 2016/03/31 09:45:41 Signal received, initializing clean shutdown...
[run] 2016/03/31 09:45:41 Waiting for clean shutdown...
[run] 2016/03/31 09:45:41 CPU profile stopped
[run] 2016/03/31 09:45:41 second signal received, initializing hard shutdown
```
### File List
```
$ sudo ls -lh /var/lib/influxdb/data/crawler_production/default/33/
total 1.3G
-rw-r--r-- 1 influxdb influxdb 134M Mar 29 14:09 000000422-000000006.tsm
-rw-r--r-- 1 influxdb influxdb 72M Mar 30 00:25 000000646-000000005.tsm
-rw-r--r-- 1 influxdb influxdb 206M Mar 30 01:13 000000646-000000006.tsm
-rw-r--r-- 1 influxdb influxdb 41M Mar 30 06:22 000000774-000000005.tsm
-rw-r--r-- 1 influxdb influxdb 246M Mar 30 07:13 000000774-000000006.tsm
-rw-r--r-- 1 influxdb influxdb 21M Mar 30 09:17 000000838-000000005.tsm
-rw-r--r-- 1 influxdb influxdb 267M Mar 30 15:58 000000838-000000006.tsm
-rw-r--r-- 1 influxdb influxdb 94M Mar 30 22:38 000001126-000000006.tsm
-rw-r--r-- 1 influxdb influxdb 52M Mar 31 05:58 000001286-000000005.tsm
-rw-r--r-- 1 influxdb influxdb 146M Mar 31 06:44 000001286-000000006.tsm
-rw-r--r-- 1 influxdb influxdb 16M Mar 31 08:11 000001334-000000005.tsm
-rw-r--r-- 1 influxdb influxdb 1.4M Mar 31 08:22 000001338-000000003.tsm
-rw-r--r-- 1 influxdb influxdb 1.4M Mar 31 08:33 000001342-000000003.tsm
-rw-r--r-- 1 influxdb influxdb 694K Mar 31 08:38 000001344-000000002.tsm
-rw-r--r-- 1 influxdb influxdb 340K Mar 31 08:41 000001345-000000001.tsm
-rw-r--r-- 1 influxdb influxdb 341K Mar 31 08:44 000001346-000000001.tsm
```
### CPU/Memory Profile
[cpu_mem_prof.zip](https://github.com/influxdata/influxdb/files/197370/cpu_mem_prof.zip)
GitHub issue: opened 10:29AM 05 Aug 18 UTC, closed 04:08PM 09 Oct 18 UTC
__System info:__ rpi-docker-influxdata:1.4.3, Raspbian Stretch, Docker CE 18.03
Logs start with
```
[I] 2018-08-04T20:24:43Z InfluxDB starting, version 1.4.3, branch 1.4, commit 60d27e6995558f38a39e90b35a92cbac080310a3
[I] 2018-08-04T20:24:43Z Go version go1.9.2, GOMAXPROCS set to 4
```
__Steps to reproduce:__
Unknown.
__Expected behavior:__ InfluxDB has been running for some months quietly and peacefully.
__Actual behavior:__ Yesterday morning InfluxDB started writing a lot of TSM files under shard 197. Historically it has written about one file per day, but now it started writing 11 files per minute. It continued writing files under shard 197 until disk space ran out several hours later. After some restarts of the Influx container and log compression, enough disk space was freed to inspect the system.
InfluxDB itself seems to be somewhat operational, but cannot access any data written after 2018-07-30:
```
> select count(temperature) from ruuvitag where time >= '2018-07-29T00:00:00Z' and time < '2018-08-30T00:00:00Z';
name: ruuvitag
time count
---- -----
1532822400000000000 190089
> select count(temperature) from ruuvitag where time >= '2018-07-30T00:00:00Z' and time < '2018-08-31T00:00:00Z';
>
```
which seems to correspond to data written under the "197" directory based on mtimes (see ls listings below).
It's not clear to me what caused the files to be written. Is there some way to inspect contents of TSM files? Logs from the containers that write to the database were not retained, so it's not possible to find out what was going on when the incident started.
__Additional info:__
Old TSM files and start of the inrush:
```
-rw-r--r-- 1 root root 13M Jul 31 03:14 000000001-000000001.tsm
-rw-r--r-- 1 root root 12M Aug 1 07:38 000000002-000000001.tsm
-rw-r--r-- 1 root root 12M Aug 2 07:20 000000003-000000001.tsm
-rw-r--r-- 1 root root 12M Aug 3 08:38 000000004-000000001.tsm
-rw-r--r-- 1 root root 12M Aug 4 08:12 000000005-000000001.tsm
-rw-r--r-- 1 root root 12M Aug 4 08:12 000000006-000000001.tsm
-rw-r--r-- 1 root root 12M Aug 4 08:12 000000007-000000001.tsm
-rw-r--r-- 1 root root 12M Aug 4 08:12 000000008-000000001.tsm
-rw-r--r-- 1 root root 12M Aug 4 08:12 000000009-000000001.tsm
...
```
Hourly file counts:
```
pi@tau:/data/influxdb/data $ sudo ls -ltrh tag_data/autogen/197 | cut -d ' ' -f6- | cut -d':' -f1 | uniq -c
1
1 Jul 31 03
1 Aug 1 07
1 Aug 2 07
1 Aug 3 08
446 Aug 4 08
548 Aug 4 09
535 Aug 4 10
544 Aug 4 11
552 Aug 4 12
517 Aug 4 13
472 Aug 4 14
437 Aug 4 15
259 Aug 4 16
```
There are no errors in the logs when running queries past 2018-07-30, but there are some sporadic errors related to compaction of other data directories:
```
[I] 2018-08-05T10:14:58Z error replacing new TSM files: cannot allocate memory engine=tsm1
[I] 2018-08-05T10:14:59Z error replacing new TSM files: cannot allocate memory engine=tsm1
[I] 2018-08-05T10:15:00Z beginning full compaction, 8 TSM files engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/188/000000001-000000001.tsm (#0) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/188/000000002-000000001.tsm (#1) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/188/000000003-000000001.tsm (#2) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/188/000000004-000000001.tsm (#3) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/188/000000005-000000001.tsm (#4) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/188/000000006-000000001.tsm (#5) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/188/000000007-000000001.tsm (#6) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/188/000000007-000000002.tsm (#7) engine=tsm1
[I] 2018-08-05T10:15:00Z beginning full compaction, 7 TSM files engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/179/000000001-000000001.tsm (#0) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/179/000000002-000000001.tsm (#1) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/179/000000003-000000001.tsm (#2) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/179/000000004-000000001.tsm (#3) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/179/000000005-000000001.tsm (#4) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/179/000000006-000000001.tsm (#5) engine=tsm1
[I] 2018-08-05T10:15:00Z compacting full /var/lib/influxdb/data/tag_data/autogen/179/000000006-000000002.tsm (#6) engine=tsm1
```
There are also some errors related to inability to memory-map files.
Near startup:
```
[I] 2018-08-04T20:24:46Z /var/lib/influxdb/data/tag_data/autogen/89/000000008-000000002.tsm (#0) opened in 2.577395ms engine=tsm1 service=filestore
[I] 2018-08-04T20:24:46Z Failed to open shard: 89: [shard 89] error opening memory map for file /var/lib/influxdb/data/tag_data/autogen/89/000000008-000000002.tsm: cannot allocate memory service=store
[I] 2018-08-04T20:24:46Z /var/lib/influxdb/data/tag_data/autogen/98/000000007-000000002.tsm (#0) opened in 2.981094ms engine=tsm1 service=filestore
[I] 2018-08-04T20:24:46Z Failed to open shard: 98: [shard 98] error opening memory map for file /var/lib/influxdb/data/tag_data/autogen/98/000000007-000000002.tsm: cannot allocate memory service=store
```
And about shard 197:
```
[I] 2018-08-04T20:24:50Z Failed to open shard: 197: [shard 197] error opening memory map for file /var/lib/influxdb/data/tag_data/autogen/197/000000005-000000001.tsm: cannot allocate memory service=store
```
Replying to my own question: there's a utility called influx_stress that's included with InfluxDB. I'm using it to put random data into the database, apparently more efficiently than before, because the increase in MB/sec is much higher than with either the randomized data stream or the select * into methods I was trying. It appears to be a good fit for my test case.
Edit: heads up that the influx_stress included with influxdb is different from influx-stress, which is also written and maintained by InfluxData. The influx-stress tool has more features useful for my test case.
Edit 2: apparently the data produced by both influx_stress and influx-stress are not random. They are simply the same values over and over, so when compacted they get squished to almost nothing due to run-length encoding. This isn't a solution after all.
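A rough fallback is generating line protocol with genuinely random values myself and POSTing it to the standard HTTP write endpoint. The measurement name (stress), database (testdb), and batch size below are placeholders:

```
gen_points() {  # gen_points <count> <start_epoch_ns>
  local n=$1 t=$2 i
  for ((i = 0; i < n; i++)); do
    # $RANDOM is 0..32767; combine two draws for a wider random value
    echo "stress,host=h$((i % 100)) value=$((RANDOM * 32768 + RANDOM)) $((t + i * 1000000000))"
  done
}

# Post in 5000-line batches to the usual write endpoint, e.g.:
# gen_points 5000 1459400000000000000 | curl -s -XPOST \
#   'http://localhost:8086/write?db=testdb' --data-binary @-
```

Random values should defeat the run-length encoding, so the compacted files stay close to their written size.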
maltk, September 18, 2019, 3:35pm:
We also have problems with compaction overloading our 32-bit computer.
Did you find a way to disable compaction?
(Almost all our data is just waiting to be uploaded to another DB, where we will compact it.)
TIA