Data not observed in InfluxDB after successful write from Telegraf

I am running one container of InfluxDB 2.0 and one container of Telegraf. I am collecting our user data and pushing it into InfluxDB 2.0 through a Telegraf input plugin, but the data does not appear in InfluxDB. As a test, we wrote the same data to an external file using Telegraf, and the data does show up in the file. We then fed that file back in as input through a Telegraf plugin, but the data still does not reach InfluxDB.
When we collect system metrics with [[inputs.cpu]] and [[inputs.disk]], that data does arrive in InfluxDB, and we are getting internal metrics in InfluxDB as well.
When we put both [[inputs.cpu]] and our own data-source configuration in the same telegraf.conf, everything arrives except our own data.
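For reference, the intermediate-file test described above can be done with Telegraf's file output plugin; a minimal sketch of such an output section (the file path here is illustrative, not the one we actually used) looks like:

```toml
# Sketch: dump all collected metrics to a local file for inspection.
# The path is an example only.
[[outputs.file]]
  files = ["/tmp/telegraf_debug.out"]
  data_format = "influx"
```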

System Information
Ubuntu - 19.02
InfluxDB version - 2.0 (Quay Docker image)
Telegraf version - 1.15.3

Telegraf configuration

[agent]
  interval = "10s"
  debug = true
  quiet = false
[[outputs.influxdb_v2]]
  ## urls example: http://127.0.0.1:8086
  ## urls = ["http://influxdb:8086"]
  urls = ["http://14.206.177.28:8086"]
  namedrop = ["exe_*"]
  token = "$INFLUX_TOKEN"
  organization = "org_name"
  bucket = "logs2"
[[outputs.influxdb_v2]]
  ## urls example: http://127.0.0.1:8086
  ## urls = ["http://influxdb:8086"]
  urls = ["http://14.206.177.28:8086"]
  namepass = ["exe_*"]
  token = "$INFLUX_TOKEN"
  organization = "org_name"
  bucket = "logs"
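The two output sections split metrics by measurement name: Telegraf's namepass/namedrop filters use glob patterns, so measurements prefixed with exe_ (the renamed internal metrics) should land in bucket "logs" while everything else goes to "logs2". A rough illustration of that glob-based routing in Python (this is not Telegraf's actual implementation, just the selection logic):

```python
from fnmatch import fnmatch

def route(measurement: str) -> str:
    """Illustrative stand-in for Telegraf's namepass/namedrop glob filtering."""
    # Second output: namepass = ["exe_*"] -> bucket "logs"
    if fnmatch(measurement, "exe_*"):
        return "logs"
    # First output: namedrop = ["exe_*"] -> bucket "logs2" for everything else
    return "logs2"

print(route("exe_influxdb"))  # internal metrics -> "logs"
print(route("cpu"))           # system metrics  -> "logs2"
```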
[[inputs.kinesis_consumer]]
  region = "provided"
  access_key = "provided"
  secret_key = "provided"
  profile = "arn:aws:iam::666268854852:instance-profile/ec2_to_aws_admin"
  streamname = "new_stream"
  shard_iterator_type = "TRIM_HORIZON"
  data_format = "csv"
  csv_header_row_count = 1
  csv_tag_columns = ["node","operatorName","type"]
  csv_timestamp_column = "datetime"
  csv_timestamp_format = "2006-01-02T15:04:05"
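Note that csv_timestamp_format uses Go's reference-time layout, so "2006-01-02T15:04:05" means the datetime column must contain values shaped like 2020-11-27T09:33:21, with no timezone suffix and no fractional seconds; rows whose timestamps do not match this layout can fail to parse. A small Python check of that shape (the sample value is made up, and %Y-%m-%dT%H:%M:%S is the strptime equivalent of the Go layout):

```python
from datetime import datetime

# Go layout "2006-01-02T15:04:05" corresponds to strptime "%Y-%m-%dT%H:%M:%S".
LAYOUT = "%Y-%m-%dT%H:%M:%S"

sample = "2020-11-27T09:33:21"  # made-up value in the expected shape
parsed = datetime.strptime(sample, LAYOUT)
print(parsed)  # -> 2020-11-27 09:33:21
```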

[[inputs.influxdb]]
  urls = ["http://influxdb:8086/debug/vars"]
  timeout = "5s"
  name_prefix = "exe_"

[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "overlay", "aufs", "squashfs"]

Telegraf log output:
2020-11-27T09:33:21Z I! Starting Telegraf 1.15.3
2020-11-27T09:33:21Z I! Using config file: /etc/telegraf/telegraf.conf
2020-11-27T09:33:21Z I! Loaded inputs: cpu disk kinesis_consumer influxdb
2020-11-27T09:33:21Z I! Loaded aggregators:
2020-11-27T09:33:21Z I! Loaded processors:
2020-11-27T09:33:21Z I! Loaded outputs: influxdb_v2 influxdb_v2
2020-11-27T09:33:21Z I! Tags enabled: host=26e286667de4
2020-11-27T09:33:21Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"26e286667de4", Flush Interval:10s
2020-11-27T09:33:21Z D! [agent] Initializing plugins
2020-11-27T09:33:21Z D! [agent] Connecting outputs
2020-11-27T09:33:21Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2020-11-27T09:33:21Z D! [agent] Successfully connected to outputs.influxdb_v2
2020-11-27T09:33:21Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2020-11-27T09:33:21Z D! [agent] Successfully connected to outputs.influxdb_v2
2020-11-27T09:33:21Z D! [agent] Starting service inputs
2020-11-27T09:33:31Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2020-11-27T09:33:31Z D! [outputs.influxdb_v2] Wrote batch of 10 metrics in 26.832907ms
2020-11-27T09:33:31Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics

From the above logs, we can see that the output plugins are connected and data is being written to InfluxDB, but we are not seeing any measurement related to our data in InfluxDB.

Thanks

Hi @Ravikant_Gautam, are you able to try the latest InfluxDB (2.0.2) and Telegraf (1.16.3) releases to see if the problem still occurs there? Or are you pinned to your current versions? Upgrading InfluxDB would be especially worthwhile, since many fixes have gone in since the last RC with that Docker tag.