Telegraf from 1.18 to 1.30

Hello All,

I have a Docker Telegraf 1.18 setup and it's working fine. Whenever a new file is uploaded into the S3 bucket, it triggers the Lambda function, which pushes the data to Kinesis; Telegraf consumes it and writes the data into InfluxDB.
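For reference, the Telegraf leg of that pipeline boils down to a kinesis_consumer input feeding an influxdb_v2 output; a minimal sketch with placeholder region, stream, and bucket names (the S3 → Lambda → Kinesis part happens outside Telegraf):

[[inputs.kinesis_consumer]]
  region = "us-east-1"              # placeholder region
  streamname = "example-stream"     # placeholder stream name
  shard_iterator_type = "LATEST"
  data_format = "csv"
  csv_header_row_count = 1

[[outputs.influxdb_v2]]
  urls = ["http://influxdb:8086"]
  token = "${INFLUXDBTOKEN}"
  organization = "CS"
  bucket = "example-bucket"         # placeholder bucket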

Now I have changed Telegraf to 1.30 and updated a few Telegraf config files to run the Telegraf container.

My config files are as below:

4g-agent-telegraf.conf:
[agent]
  interval = "10s"
  debug = false
  metric_buffer_limit = 100000
  metric_batch_size = 8000
  omit_hostname = true
  logtarget = "file"
  logfile = "/etc/telegraf/log/telegraf.log"
  logfile_rotation_max_size = "10MB"
  logfile_rotation_max_archives = 3
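One side effect of this agent config worth noting: with logtarget = "file", Telegraf writes its logs to /etc/telegraf/log/telegraf.log inside the container, so `docker logs` stays quiet. When debugging, a hedged sketch like this makes the agent chattier and visible from the container output:

[agent]
  debug = true          # log each gather/flush cycle, so you can see whether the input produces metrics
  logtarget = "stderr"  # send logs to the container's stderr so `docker logs` shows them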

4g-input-telegraf.conf:
[[inputs.kinesis_consumer]]
  region = "${REGION}"
  max_undelivered_messages = 500000 #2,00,000 #20,00,000
  #interval = "1s" #6s
  access_key = "${ACCESS_KEY}"
  secret_key = "${SECRET_KEY}"
  streamname = "chronos_4g_cu_om"
  shard_iterator_type = "LATEST"
  data_format = "csv"
  csv_header_row_count = 1
  csv_tag_columns = ["column names"]
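A hedged aside on shard_iterator_type: with "LATEST", the consumer only sees records that arrive after Telegraf has started, so anything pushed while the container is down or restarting is never read. To test whether the consumer can read the stream at all, replaying from the start helps:

[[inputs.kinesis_consumer]]
  region = "${REGION}"
  streamname = "chronos_4g_cu_om"
  # "TRIM_HORIZON" replays from the oldest record still retained in the
  # stream, instead of waiting for new records like "LATEST" does.
  shard_iterator_type = "TRIM_HORIZON"
  data_format = "csv"
  csv_header_row_count = 1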

processor-telegraf.conf:
[[processors.converter]]
  [processors.converter.tags]
    measurement = ["type"]
  [processors.converter.fields]
    tag = ["BcMacId","operatorName","eNBName","PlmnIdentity","cellName","eNodeBId","NoOfController"]
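To illustrate what this converter is expected to do, here is a hypothetical before/after in line protocol (made-up measurement, tag, and field values): the "type" tag becomes the measurement name, and the listed fields become tags.

# before (assuming the CSV parser put type in a tag and BcMacId in a field):
kinesis_consumer,type=lte_stats BcMacId="AA:BB:CC",rsrp=-90i 1718000000000000000
# after the converter:
lte_stats,BcMacId=AA:BB:CC rsrp=-90i 1718000000000000000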

Telegraf container logs:
2024-06-24T07:19:28Z I! Starting Telegraf 1.30.0 brought to you by InfluxData the makers of InfluxDB
2024-06-24T07:19:28Z I! Available plugins: 233 inputs, 9 aggregators, 31 processors, 24 parsers, 60 outputs, 6 secret-stores
2024-06-24T07:19:28Z I! Loaded inputs: kinesis_consumer (4x)
2024-06-24T07:19:28Z I! Loaded aggregators:
2024-06-24T07:19:28Z I! Loaded processors: converter
2024-06-24T07:19:28Z I! Loaded secretstores:
2024-06-24T07:19:28Z I! Loaded outputs: file (2x) influxdb_v2 (6x)
2024-06-24T07:19:28Z I! Tags enabled:
2024-06-24T07:19:28Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"", Flush Interval:10s

In CloudWatch I can see that the Lambda function triggered and the data was pushed to the stream, but Telegraf is not able to read the data.

Is there any config that I missed?

Thanks in Advance.

Can anyone help here?

@BHARATH_KUMAR Is it that Telegraf isn't able to read the data, or is it not able to write it to InfluxDB? I can see that outputs are enabled, but there weren't any outputs in the configuration files you provided.

@scott
In Kinesis I can see the message data pushed to the stream, but Telegraf is not able to read it, and I am not seeing any error in the Telegraf logs.

[[outputs.influxdb_v2]]
  ## Test bucket
  urls = ["http://influxdb:8086"]
  tagexclude = ["data_type","device_type","host","operator"]
  namedrop = ["exe_*"]
  fieldexclude = ["TimeInUTCOffset","time","date","NodeCellId"]
  token = "${INFLUXDBTOKEN}"
  organization = "CS"
  bucket = "$BUCKET"
  flush_interval = "10s"
  metric_buffer_limit = 120000
  [outputs.influxdb_v2.tagpass]
    operator = ["CS"]
    data_type = ["test"]

[[outputs.file]]
  files = ["stdout"]

The above is the output-telegraf.conf file.
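One tagpass behaviour worth keeping in mind when nothing shows up in InfluxDB: tagpass is a selector, so any metric that does not carry a matching tag is silently dropped by this output, with no error logged. As far as I know, multiple tagpass keys are OR'd, and selectors run before tagexclude removes the tags. A sketch of the effect, with the values from the config above:

[outputs.influxdb_v2.tagpass]
  # A metric passes if it has operator=CS or data_type=test.
  # If the CSV rows never produce these tags at all, this output
  # writes nothing and stays silent about it.
  operator = ["CS"]
  data_type = ["test"]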

I am not able to see the data in InfluxDB, nor in the docker logs.

Could anyone please help me?

Thanks in Advance

FWIW, I've seen folks have issues with kinesis_consumer before, but I never heard back from those users about what they found out, one way or another:

The theory was a configuration issue, but I don’t know what to point at.

Thanks for your reply… Let me go through this doc.

Actually, they were getting some info in their logs, but I am not getting any logs from Telegraf after uploading some files.

Hello All,

I tried changing the input configuration as below:

[[inputs.kinesis_consumer]]
  ## Amazon REGION of kinesis endpoint.
  region = "${REGION}"
  max_undelivered_messages = 500000 #2,00,000 #20,00,000
  #interval = "1s" #6s
  access_key = "${ACCESS_KEY}"
  secret_key = "${SECRET_KEY}"
  role_arn = "${EC2_PROFILE}"
  streamname = "$stream_name"
  shard_iterator_type = "LATEST"
  data_format = "csv"
  merge = "override"
  csv_header_row_count = 1
  csv_tag_columns = ["ver","node","operatorName","type","data_type","cellId","node_cellId","FriendlyName","NoOfController","SwVersion","Band","City","CustomerName","State","VenueType","HwVersion","USID","Region","Market","Transport","Venue","device_type","DASType","NuetralHostName","Deployment"]
  csv_timestamp_format = "2006/01/02T15:04:05"
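A hedged aside on the parser options above: as far as I understand the CSV parser, csv_timestamp_format only takes effect together with csv_timestamp_column, and the format string is a Go reference-time layout. Something like this, with a hypothetical column name that would have to match the CSV header:

  csv_timestamp_column = "date"                 # hypothetical column name
  csv_timestamp_format = "2006/01/02T15:04:05"  # Go reference time written in the data's layout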

This is the kinesis input plugin configuration. Earlier, for the 1.18 version, I used profile = "{$EC2_PROFILE}", where we created a role for EC2, and it worked. Later, for the 1.30 version with the same config, I am getting the error below:

[telegraf] Error running agent: starting input inputs.kinesis_consumer: failed to get shared config profile,

Later I commented out the profile and added role_arn, where I passed the Kinesis stream ARN as the role_arn. It worked in one setup, but with the same config it didn't work in another setup.
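For reference, a hedged sketch of the credential styles the plugin's AWS options support, as I understand them (placeholder profile name and ARN): role_arn is expected to be an IAM role ARN to assume, not the Kinesis stream ARN.

[[inputs.kinesis_consumer]]
  region = "${REGION}"
  streamname = "$stream_name"
  # Option 1: explicit keys
  access_key = "${ACCESS_KEY}"
  secret_key = "${SECRET_KEY}"
  # Option 2: a named profile, which must exist in the shared
  # config/credentials file visible inside the container
  #profile = "ec2-profile-name"      # hypothetical profile name
  # Option 3: assume an IAM role (a *role* ARN, not a stream ARN)
  #role_arn = "arn:aws:iam::123456789012:role/telegraf-kinesis"  # placeholder ARN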

Can anyone help where I went wrong?

output.conf:

[[outputs.influxdb_v2]]
  ## Test bucket
  urls = ["http://influxdb:8086"]
  tagexclude = ["data_type","device_type","host","operator"]
  namedrop = ["exe_*"]
  fieldexclude = ["TimeInUTCOffset","time","date","NodeCellId"]
  token = "${INFLUXDBTOKEN}"
  organization = "CS"
  bucket = "$bucket_name"
  flush_interval = "10s"
  metric_buffer_limit = 120000
  [outputs.influxdb_v2.tagpass]
    operator = ["CS"]
    data_type = ["test"]

I also want to understand why I am not able to use profile, as it throws an error in 1.30 but not in the 1.18 version.

Thanks in Advance.

Could anyone help me in this?

shared config profile

As far as I know, there were no intentional changes to remove support.

v1.18 was released in March of 2021, which means there are more than three years of changes between the two versions you are trying to move between. That doesn't make it easy to determine what might have changed.

[telegraf] Error running agent: starting input inputs.kinesis_consumer: failed to get shared config profile,

The biggest change I can think of is the move to the v2 AWS SDK. Looking at the SDK itself and that error message, it appears to show up when the named profile does not actually exist in the shared config: aws-sdk-go-v2/config/config.go at 0e07cc82b25692dce8f68e0b5bd0d0c5cdbcd279 · aws/aws-sdk-go-v2 · GitHub
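In other words, the SDK resolves the profile name against the shared config/credentials files, so the named profile has to exist in a file the container can actually see (by default ~/.aws/config and ~/.aws/credentials, or the paths in AWS_CONFIG_FILE / AWS_SHARED_CREDENTIALS_FILE). A hedged example with placeholder names and values:

# ~/.aws/config, mounted into the container
[profile ec2-profile-name]
region = us-east-1

# ~/.aws/credentials
[ec2-profile-name]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = examplesecret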

Searching the web, I found a couple of posts about this error as well. It seems it can stem from a number of issues, anything from quotes to extra params and more:
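As a concrete example of the quoting issue from those posts, as I understand it: if the value ends up containing literal quote characters, the SDK looks for a profile whose name includes the quotes and fails with exactly this error. A hedged sketch:

# Fails: the profile name the SDK sees is "myprofile", quotes included
#profile = '"myprofile"'
# Works, assuming a profile named myprofile exists in the shared config file:
profile = "myprofile"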