We are trying to use the jolokia2 input plugin to monitor some Kafka JMX counters.
Our basic telegraf config is:
[[inputs.jolokia2_agent]]
  urls = ["http://localhost:50000/jolokia"]

  [[inputs.jolokia2_agent.metrics]]
    name = "kafka_server"
    mbean = "kafka.server:type=*,name=*"
    field_prefix = "$1."
    field_name = "$2"
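(The two * wildcards in the mbean pattern are what $1 and $2 refer to, so a bean such as kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec should come out as a field named BrokerTopicMetrics.MessagesInPerSec, as in the output below.)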
Testing the config, everything looks fine:
$ telegraf --config kafka.conf --test
- Plugin: inputs.jolokia2_agent, Collection 1
kafka_server,jolokia_agent_url=http://localhost:50564/jolokia,host=mll-kbrk0001a.surescripts-lt.int ReplicaManager.LeaderCount=216,BrokerTopicMetrics.MessagesInPerSec="messages",BrokerTopicMetrics.FailedProduceRequestsPerSec=0,BrokerTopicMetrics.BytesOutPerSec=5829305923,ReplicaManager.IsrShrinksPerSec="SECONDS",KafkaServer.yammer-metrics-count=6167,BrokerTopicMetrics.BytesInPerSec=0.0000000000000027675887095894554,ReplicaManager.UnderReplicatedPartitions=0,BrokerTopicMetrics.BytesRejectedPerSec=0,KafkaRequestHandlerPool.RequestHandlerAvgIdlePercent=3892651689053460,SessionExpireListener.ZooKeeperReadOnlyConnectsPerSec="readonlyconnects",KafkaServer.ClusterId="-uXw2MgnR4SRYJCzCSq_IQ",KafkaServer.BrokerState=3,BrokerTopicMetrics.ReplicationBytesInPerSec="SECONDS",SessionExpireListener.ZooKeeperExpiresPerSec=0,SessionExpireListener.ZooKeeperDisconnectsPerSec=0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002964393875,BrokerTopicMetrics.ReplicationBytesOutPerSec=799732.7596012988,SessionExpireListener.ZooKeeperAuthFailuresPerSec=0,SessionExpireListener.ZooKeeperSyncConnectsPerSec=6,SessionExpireListener.ZooKeeperSaslAuthenticationsPerSec="saslauthentications",BrokerTopicMetrics.TotalProduceRequestsPerSec="requests",BrokerTopicMetrics.TotalFetchRequestsPerSec=2966.576467040573,BrokerTopicMetrics.FailedFetchRequestsPerSec="requests",ReplicaManager.PartitionCount=575,ReplicaManager.FailedIsrUpdatesPerSec=0,ReplicaManager.IsrExpandsPerSec=0.00000000009298240433792074 1517515447000000000
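Looking closer at that output, though, several fields are strings that look like metadata rather than rates, e.g. BrokerTopicMetrics.MessagesInPerSec="messages" and ReplicaManager.IsrShrinksPerSec="SECONDS"; those resemble the EventType and RateUnit attributes of the underlying Meter mbeans. What Jolokia itself returns for one of these beans can be checked with something like (bean name taken from the output above, port as configured):
$ curl 'http://localhost:50000/jolokia/read/kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec'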
When I reload the telegraf service, the logs start getting spammed with errors like these:
2018-02-01T20:00:45Z E! Field type conflict, dropping conflicted points: Response Error: Status Code [400], expected [204], [partial write: field type conflict: input field "BrokerTopicMetrics.TotalProduceRequestsPerSec" on measurement "kafka_server" is type float, already exists as type string dropped=22]
2018-02-01T20:00:55Z E! Field type conflict, dropping conflicted points: Response Error: Status Code [400], expected [204], [partial write: field type conflict: input field "SessionExpireListener.ZooKeeperAuthFailuresPerSec" on measurement "kafka_server" is type string, already exists as type float dropped=13]
2018-02-01T20:01:05Z E! Field type conflict, dropping conflicted points: Response Error: Status Code [400], expected [204], [partial write: field type conflict: input field "BrokerTopicMetrics.TotalFetchRequestsPerSec" on measurement "kafka_server" is type string, already exists as type float dropped=13]
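As I understand it, InfluxDB locks each field key to a single type per measurement (within a shard), so once a field has been written as a float, later string values for it are rejected, and vice versa. The stored types can be inspected with:
SHOW FIELD KEYS FROM kafka_server;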
My initial thought was that a previous config must have written the data as floats, and the new config is now writing some of it as strings. So I logged into influx and dropped the measurement:
DROP MEASUREMENT kafka_server;
SHOW MEASUREMENTS;
The kafka_server measurement no longer shows up, but the errors still persist.
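One detail that stands out: the conflicts flip direction between writes (float where a string exists, then string where a float exists), which makes me wonder whether each collected batch itself mixes types; if so, the very first write after the DROP would recreate the conflict. Assuming the strings really are the RateUnit/EventType attributes, would restricting collection to the numeric attributes avoid this? An untested sketch (attribute names assumed; the field names would then gain an attribute suffix, e.g. BrokerTopicMetrics.MessagesInPerSec.OneMinuteRate):

  [[inputs.jolokia2_agent.metrics]]
    name = "kafka_server"
    mbean = "kafka.server:type=*,name=*"
    # collect only the numeric attributes, skipping strings like RateUnit/EventType
    paths = ["Count", "OneMinuteRate", "FiveMinuteRate", "FifteenMinuteRate", "MeanRate", "Value"]
    field_prefix = "$1.$2."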
Any ideas on what could be going on?