I installed four containers on my EC2 instance and every container is running fine. One of the containers is Telegraf and another one is InfluxDB. I am trying to write data from Telegraf to InfluxDB, and the data coming into Telegraf is from AWS Kinesis. With everything up and running, the data from Kinesis reaches Telegraf, but it is not going from Telegraf to InfluxDB.
Here is what I have changed in the telegraf.conf file to write data to InfluxDB:
[[outputs.influxdb]]
urls = ["influxdb:8086"]
#urls = ["http://localhost:8086"]
database = "telegraf"
This is a snippet from my docker-compose.yml file
influxdb:
  image: influxdb:1.8.2
  container_name: influxdb
  restart: always
  ports:
    - 8086:8086
  networks:
    - analytics
  volumes:
    - /mnt/db/:/mnt/db/
    - /mnt/influx/:/mnt/influx/
    - ./etc/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf

telegraf:
  image: telegraf
  container_name: telegraf
  restart: always
  depends_on:
    - influxdb
  networks:
    - analytics
  volumes:
    - telegraf-storage:/var/lib/telegraf
    - ./etc/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf
  environment:
    INFLUXDB_URL: http://influxdb:8086
    # - username=admin
    # - password=admin
  links:
    - influxdb
This is the error I am getting. I checked that the port is listening, and data is coming into Telegraf from Kinesis.
[outputs.influxdb] When writing to [http://influxdb:8086]: database "telegraf" creation failed: Post "http://influxdb:8086/query": dial tcp 172.31.0.2:8086: connect: connection refused
What the error is saying is this:
You are writing to a database that does not exist, and the user you are using does not have permission to create the database (which is fine, since that requires admin permissions).
Have a look at your InfluxDB user permissions and databases; you can create the database manually and give the user the appropriate permissions (just WRITE in this case).
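For example, from the influx CLI something along these lines would do it (the user name and password here are just placeholders; adjust them to your setup):

CREATE DATABASE telegraf
CREATE USER "telegraf" WITH PASSWORD 'secret'
GRANT WRITE ON "telegraf" TO "telegraf"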
No, I had created the database, and I was also able to see the data. The error is still there, but I am able to get data into 'telegraf' and view it in Grafana. So maybe I shouldn't focus on that error.
Do you have any idea about seeing 'LATEST' data using Telegraf? When using shard_iterator_type as 'TRIM_HORIZON' I am able to view the data, but when I want to see the latest data I am not able to see it, even though I have changed shard_iterator_type.
I'm no expert in AWS or Kinesis, so I can't help you with that part. (I see shard_iterator_type is a config option in the Kinesis input, but I don't know what it does.)
About Telegraf: you can't see the data using Telegraf itself. You can use Chronograf or Grafana to visualize your data, or just query InfluxDB from the CLI, though that's not the nicest way to look at it.
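For a quick check from the CLI it's something like this (the database name comes from your config; the measurement name is just a placeholder):

docker exec -it influxdb influx -database 'telegraf' -execute 'SHOW MEASUREMENTS'
docker exec -it influxdb influx -database 'telegraf' -execute 'SELECT * FROM "my_measurement" ORDER BY time DESC LIMIT 5'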
I don't know what data structure you get from the Kinesis input, or what difference shard_iterator_type makes.
Does it return cumulative counters, or just the value at the current moment?
If you post a short sample of data I might be able to help
I am using the CSV format. According to the AWS documentation, the 'LATEST' shard_iterator_type gives the latest data being pushed into Kinesis, and 'TRIM_HORIZON' gives all the data that has been pushed so far.
I was referring to this GitHub link: telegraf/plugins/inputs/kinesis_consumer at master · influxdata/telegraf · GitHub
And here is my config file section for kinesis_consumer
In that GitHub readme they give a configuration block for each input data format, but I have a doubt here. For example, this is what they give for the CSV format:
[[inputs.file]]
files = ["example"]
data_format = "csv"
csv_header_row_count = 0
csv_column_names = []
csv_column_types = []
csv_skip_rows = 0
csv_skip_columns = 0
csv_delimiter = ","
csv_comment = ""
csv_trim_space = false
csv_tag_columns = []
csv_measurement_column = ""
csv_timestamp_format = ""
csv_timezone = ""
So I am confused about whether to write the above options in an inputs.file section, or in the inputs.kinesis_consumer section below the data_format line.
Since you are writing to InfluxDB you should use data_format = "influx", and you won't need to do anything else on the Telegraf side.
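In your Kinesis input that would look roughly like this (the region and stream name are placeholders; keep whatever you already have):

[[inputs.kinesis_consumer]]
  region = "us-east-1"             # placeholder
  streamname = "my-stream"         # placeholder
  shard_iterator_type = "LATEST"   # or "TRIM_HORIZON"
  ## data format to consume
  data_format = "influx"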
If you still don't like something about the data structure you can apply some processor plugins (like override, converter or rename), but usually that isn't needed.
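A rename processor, for instance, looks roughly like this (both measurement names here are made up):

[[processors.rename]]
  [[processors.rename.replace]]
    measurement = "kinesis_consumer"   # made-up source name
    dest = "sensor_readings"           # made-up new name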
If you use any other data format (CSV, JSON, etc.) you will have to provide the mapping yourself, from format X to "influx", and to do that properly you had better have a look at the docs on the InfluxDB line protocol.
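As a sketch, mapping CSV yourself would look something like this inside the Kinesis input (the column names, tag choice and timestamp format are pure assumptions about your data):

[[inputs.kinesis_consumer]]
  # ... your existing kinesis options ...
  data_format = "csv"
  csv_header_row_count = 0
  csv_column_names = ["time", "device", "temperature"]   # assumed columns
  csv_tag_columns = ["device"]                           # becomes an InfluxDB tag
  csv_timestamp_column = "time"
  csv_timestamp_format = "unix"                          # assumed epoch seconds

Each row would then come out as line protocol along these lines (the measurement name depends on your settings):

kinesis_consumer,device=sensor-1 temperature=22.5 1609459200000000000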