Error - tags exceeding limit

I am monitoring the location of some bikes and using the Grafana Worldmap Panel to plot these points on a map. It was working fine until today, when InfluxDB stopped on its own. I started it again and found this in the logs:

[I] 2017-04-06T07:33:35Z WARN: 85% of max-values-per-tag limit exceeded: (85707/100000), db=vehicleinfo shard=8 measurement=vehicleinfo tag=geohash service=shard

Here geohash is stored as a tag, which is necessary for plotting locations on the Worldmap Panel. This warning is displayed over and over in the logs, and sometimes InfluxDB stops working.

What I gather from this is that there is a limit on the number of values a tag can have, which has been exceeded and is causing InfluxDB to stop. Please explain this warning. If there is a limit on tag values, can I change it, or remove it entirely? I have to store these geohashes in large numbers.

Also, can I convert these tags into fields? If yes, then how?

I know that tags are the values we query on, while fields are values that change frequently and are rarely queried. Please tell me more differences between tags and fields.

@Luv This is a configuration setting. It does not cause the database to stop; it causes points with new tag values to stop being written. It's important to understand the number of series being written.
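
For reference, the setting in question is max-values-per-tag in the [data] section of the InfluxDB 1.x configuration file, typically /etc/influxdb/influxdb.conf. A minimal sketch of that section with the shipped defaults; setting either limit to 0 disables it, at the cost of unbounded series growth:

[data]
  # Maximum number of distinct values allowed per tag key.
  # Writes that would push a tag past this limit are rejected.
  max-values-per-tag = 100000
  # Related cap on total series per database; 0 disables it as well.
  max-series-per-database = 1000000

Restart influxd after editing the file. Note that raising or disabling the limits silences the guard rail but does not fix the underlying cardinality problem.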

The major difference between tags and fields is that tags are indexed and fields are not. I would suggest this article on line protocol as an introduction to our data model.
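
To make the distinction concrete, here is a hypothetical line-protocol point for a schema like yours (the vehicle_id tag, speed field, and values are illustrative, not taken from your data). Everything between the measurement name and the first space is a tag; everything after it is a field; the trailing number is an optional nanosecond timestamp:

vehicleinfo,vehicle_id=bike42 geohash="ttnfv2u6kqd",speed=14.2 1491463415000000000

Tags are indexed and always stored as strings; fields are not indexed, and string field values must be double-quoted. Written this way, a new geohash does not create a new series; only a new vehicle_id does.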

There is also a major feature we are working on which might help your use case.

That was very helpful.

But one thing: can we convert tags into fields, or fields into tags?

@Luv There is no native way to update data in the database. You would need to rewrite the data.
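
One way to do that rewrite, assuming an InfluxQL shell against a 1.x server: a SELECT ... INTO query without GROUP BY * converts the tags in the source data into fields in the destination measurement (GROUP BY * would preserve them as tags). A sketch, with the destination name and the vehicle_id tag purely illustrative:

-- Copy all points; every tag, including geohash, becomes a field.
SELECT * INTO "vehicleinfo_flat" FROM "vehicleinfo"

-- Grouping by a specific tag should keep it as a tag while demoting the rest.
SELECT * INTO "vehicleinfo_flat" FROM "vehicleinfo" GROUP BY "vehicle_id"

For a large measurement, run the copy in time-bounded chunks (WHERE time >= '...' AND time < '...') so the query itself does not exhaust server memory, then drop the old measurement once you have verified the copy.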


@jackzampolin Hey Jack, I am getting this in my logs now:

[I] 2017-04-10T07:23:50Z Snapshot for path /var/lib/influxdb/data/_internal/monitor/15 written in 683.897491ms engine=tsm1
[I] 2017-04-10T07:24:05Z Snapshot for path /var/lib/influxdb/data/vehicleinfo/autogen/8 written in 4.759451815s engine=tsm1
[I] 2017-04-10T07:24:06Z beginning level 1 compaction of group 0, 2 TSM files engine=tsm1
[I] 2017-04-10T07:24:06Z compacting level 1 group (0) /var/lib/influxdb/data/vehicleinfo/autogen/8/000000005-000000001.tsm (#0) engine=tsm1
[I] 2017-04-10T07:24:06Z compacting level 1 group (0) /var/lib/influxdb/data/vehicleinfo/autogen/8/000000007-000000001.tsm (#1) engine=tsm1
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0xac3afa, 0x16)
    /usr/local/go/src/runtime/panic.go:566 +0x95
runtime.sysMap(0xc449d70000, 0x100000, 0xc30700, 0xe7b938)
    /usr/local/go/src/runtime/mem_linux.go:219 +0x1d0
runtime.(*mheap).sysAlloc(0xe623a0, 0x100000, 0xe623b8)
    /usr/local/go/src/runtime/malloc.go:407 +0x37a
runtime.(*mheap).grow(0xe623a0, 0x1, 0x0)
    /usr/local/go/src/runtime/mheap.go:726 +0x62
runtime.(*mheap).allocSpanLocked(0xe623a0, 0x1, 0x7f2f00000001)
    /usr/local/go/src/runtime/mheap.go:630 +0x4f2
runtime.(*mheap).alloc_m(0xe623a0, 0x1, 0x7f0000000007, 0x7f2fd0c1d8a0)
    /usr/local/go/src/runtime/mheap.go:515 +0xe0
runtime.(*mheap).alloc.func1()
    /usr/local/go/src/runtime/mheap.go:579 +0x4b
runtime.systemstack(0x7f2fdbffed48)
    /usr/local/go/src/runtime/asm_amd64.s:314 +0xab
runtime.(*mheap).alloc(0xe623a0, 0x1, 0x10000000007, 0x410b79)
    /usr/local/go/src/runtime/mheap.go:580 +0x73
runtime.(*mcentral).grow(0xe63980, 0x0)
    /usr/local/go/src/runtime/mcentral.go:210 +0x94
runtime.(*mcentral).cacheSpan(0xe63980, 0xc420198800)
    /usr/local/go/src/runtime/mcentral.go:91 +0xfa

Why out of memory?

@Luv Because you are storing the lat/lng or geohash as a tag, you are ending up with too many series in the database. This is likely causing your OOM.
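
To see how far the cardinality has grown, you can count series and tag values from the influx CLI; a sketch using the database, measurement, and tag named in your logs:

# Total series in the database (the output includes a few header lines).
influx -database 'vehicleinfo' -execute 'SHOW SERIES' | wc -l

# Every distinct geohash value recorded for the measurement.
influx -database 'vehicleinfo' -execute 'SHOW TAG VALUES FROM "vehicleinfo" WITH KEY = "geohash"'

Each series is tracked in the in-memory index, so unbounded tag values eventually translate directly into memory pressure like the crash above.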

Then is it a drawback of InfluxDB that we cannot have too many series?

@Luv Currently it is. There is some major work we are finishing that will enable longer data retention for use cases like yours.