Copy new bucket data from one InfluxDB 2.x to another InfluxDB 2.x without root access

Hello,

Situation:
I have a photovoltaic system whose data is stored on a Powerbox that runs InfluxDB v2.2.0 (bucket name: “solar”). I do not have root access, only viewer access plus a token. The data there is deleted after 90 days. Because of that, I would like to continuously transfer all data to an InfluxDB v2.7.6 that runs as an add-on in my Home Assistant. That would let me run evaluations and statistics on the data that go back more than three months.

Current configuration:

  • Powerbox with local access to InfluxDB 2.2.0 192.168.0.114:8086
  • Home Assistant on a Synology NAS in Docker with an InfluxDB 2.7.6 → 192.168.0.16:8086
  • Telegraf, but it can’t bind the IP (see log)

Log:

> 2024-04-30T08:17:09Z I! Loading config file: /etc/telegraf/telegraf.conf
> 2024-04-30T08:17:09Z I! Starting Telegraf 1.26.0
> 2024-04-30T08:17:09Z I! Available plugins: 235 inputs, 9 aggregators, 27 processors, 22 parsers, 57 outputs, 2 secret-stores
> 2024-04-30T08:17:09Z I! Loaded inputs: influxdb_v2_listener
> 2024-04-30T08:17:09Z I! Loaded aggregators: 
> 2024-04-30T08:17:09Z I! Loaded processors: 
> 2024-04-30T08:17:09Z I! Loaded secretstores: 
> 2024-04-30T08:17:09Z I! Loaded outputs: influxdb_v2
> 2024-04-30T08:17:09Z I! Tags enabled: host=f9b96d0f-hassio-telegraf
> 2024-04-30T08:17:09Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"f9b96d0f-hassio-telegraf", Flush Interval:10s
> 2024-04-30T08:17:09Z E! [telegraf] Error running agent: starting input inputs.influxdb_v2_listener: listen tcp 192.168.0.114:8086: bind: cannot assign requested address

Config:

# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

  ## Maximum number of unwritten metrics per output.  Increasing this value
  ## allows for longer periods of output downtime without dropping metrics at the
  ## cost of higher maximum memory usage.
  metric_buffer_limit = 10000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. Maximum flush_interval will be
  ## flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
  precision = ""

  ## Log at debug level.
  # debug = false
  ## Log only error level messages.
  # quiet = false

  ## Log target controls the destination for logs and can be one of "file",
  ## "stderr" or, on Windows, "eventlog".  When set to "file", the output file
  ## is determined by the "logfile" setting.
  # logtarget = "file"

  ## Name of the file to be logged to when using the "file" logtarget.  If set to
  ## the empty string then logs are written to stderr.
  # logfile = ""

  ## The logfile will be rotated after the time interval specified.  When set
  ## to 0 no time based rotation is performed.  Logs are rotated only when
  ## written to, if there is no log activity rotation may be delayed.
  # logfile_rotation_interval = "0d"

  ## The logfile will be rotated when it becomes larger than the specified
  ## size.  When set to 0 no size based rotation is performed.
  # logfile_rotation_max_size = "0MB"

  ## Maximum number of rotated archives to keep, any older logs are deleted.
  ## If set to -1, no archives are removed.
  # logfile_rotation_max_archives = 5

  ## Pick a timezone to use when logging or type 'local' for local time.
  ## Example: America/Chicago
  # log_with_timezone = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do not set the "host" tag in the telegraf agent.
  omit_hostname = false
[[outputs.influxdb_v2]]
  ## The URLs of the InfluxDB cluster nodes.
  ##
  ## Multiple URLs can be specified for a single cluster, only ONE of the
  ## urls will be written to each interval.
  ##   ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
  urls = ["http://192.168.0.16:8086"]

  ## Token for authentication.
  token = "here_is_my_correct_token_stored_for_ha-influx"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "privat"

  ## Destination bucket to write into.
  bucket = "solar"

  ## The value of this tag will be used to determine the bucket.  If this
  ## tag is not set the 'bucket' option is used as the default.
  # bucket_tag = ""

  ## If true, the bucket tag will not be added to the metric.
  # exclude_bucket_tag = false

  ## Timeout for HTTP messages.
  # timeout = "5s"

  ## Additional HTTP headers
  # http_headers = {"X-Special-Header" = "Special-Value"}

  ## HTTP Proxy override, if unset values the standard proxy environment
  ## variables are consulted to determine which proxy, if any, should be used.
  # http_proxy = "http://corporate.proxy:3128"

  ## HTTP User-Agent
  # user_agent = "telegraf"

  ## Content-Encoding for write request body, can be set to "gzip" to
  ## compress body or "identity" to apply no encoding.
  # content_encoding = "gzip"

  ## Enable or disable uint support for writing uints influxdb 2.0.
  # influx_uint_support = false

  ## Optional TLS Config for use on HTTP connections.
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
# Accept metrics over InfluxDB 2.x HTTP API
[[inputs.influxdb_v2_listener]]
  ## Address and port to host InfluxDB listener on
  ## (Double check the port. Could be 9999 if using OSS Beta)
  service_address = "192.168.0.114:8086"
  ## Maximum allowed HTTP request body size in bytes.
  ## 0 means to use the default of 32MiB.
  max_body_size = "0"

  ## Optional tag to determine the bucket.
  ## If the write has a bucket in the query string then it will be kept in this tag name.
  ## This tag can be used in downstream outputs.
  ## The default value of nothing means it will be off and the database will not be recorded.
  # bucket_tag = ""

  ## Set one or more allowed client CA certificate file names to
  ## enable mutually authenticated TLS connections
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Add service certificate and key
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

  ## Optional token to accept for HTTP authentication.
  ## You probably want to make sure you have TLS configured above for this.
  token = "here_is_my_correct_token_stored_for_powerbox"

  ## Influx line protocol parser
  ## 'internal' is the default. 'upstream' is a newer parser that is faster
  ## and more memory efficient.
  # parser_type = "upstream"

I cannot find my mistake. Is there another way to store the data?
192.168.0.114:8086 shows me the login page. After logging in, I can view the data in the bucket “solar”.
A manual export and import into the other database works fine, but that is no solution for continuous updates. Maybe someone has a better idea than using Telegraf?
I hope you can help me.

The error is saying it cannot set up the listener on that address and port.

  • Powerbox with local access to InfluxDB 2.2.0 192.168.0.114:8086

Sounds like you already have InfluxDB set up on that same address and port. Also, “cannot assign requested address” means 192.168.0.114 is the Powerbox’s address, not an address of the machine Telegraf runs on, so Telegraf cannot bind to it. You need to pick an address and port that belong to the Telegraf host.

That said, this isn’t going to work. The InfluxDB listener does exactly that: it listens. It does not connect to InfluxDB and collect data from it. If you want to pull data out of InfluxDB, you will want to use the client libraries or the InfluxDB UI itself.
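To make the client-library route concrete, here is a minimal sketch using the official `influxdb-client` Python package (`pip install influxdb-client`). The URLs and bucket name come from this thread; the source org name and both tokens are placeholders you would have to fill in, and the whole thing is a sketch, not a tested production script.

```python
# Sketch: copy recently written points from a source InfluxDB 2.x bucket to a
# destination instance, using the official influxdb-client Python package.

# Flux system columns that are not tags on the original point.
NON_TAG_KEYS = {"result", "table", "_start", "_stop",
                "_time", "_value", "_field", "_measurement"}

def flux_window_query(bucket: str, lookback: str) -> str:
    """Build a Flux query covering the last `lookback` (e.g. "15m")."""
    return f'from(bucket: "{bucket}") |> range(start: -{lookback})'

def point_tags(values: dict) -> dict:
    """Extract the original tag set from a Flux record's `values` dict."""
    return {k: v for k, v in values.items() if k not in NON_TAG_KEYS}

def copy_window(src, dst, bucket: str, lookback: str = "15m") -> int:
    """Read recent rows from `bucket` on src and rewrite them to dst.

    `src` and `dst` are influxdb_client.InfluxDBClient instances; the
    package is imported lazily so the helpers above work without it.
    """
    from influxdb_client import Point
    from influxdb_client.client.write_api import SYNCHRONOUS

    write_api = dst.write_api(write_options=SYNCHRONOUS)
    copied = 0
    for table in src.query_api().query(flux_window_query(bucket, lookback)):
        for rec in table.records:
            p = Point(rec.get_measurement()) \
                .field(rec.get_field(), rec.get_value()) \
                .time(rec.get_time())
            for k, v in point_tags(rec.values).items():
                p = p.tag(k, v)
            write_api.write(bucket=bucket, record=p)
            copied += 1
    return copied
```

You would construct the two clients roughly as `InfluxDBClient(url="http://192.168.0.114:8086", token="POWERBOX_VIEWER_TOKEN", org="SRC_ORG")` and `InfluxDBClient(url="http://192.168.0.16:8086", token="HA_INFLUX_TOKEN", org="privat")`, then call `copy_window(src, dst, "solar")` from cron at an interval shorter than the lookback window. Overlapping windows are harmless: rewriting a point with identical measurement, tags, field and timestamp overwrites it rather than duplicating it.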

Thank you for your help. I misunderstood the hints above the relevant lines, but your note helped me. Telegraf is now listening correctly, but still no data is being stored.

What else do I have to do to get data from the Powerbox InfluxDB into my own InfluxDB?

Look at the influx backup CLI command to pull data out of InfluxDB. You can then directly import that data with influx restore, use the client libraries to read and import it, and/or use Telegraf to parse the files and import them.
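If you go the export-then-import route, the import side can be automated without any extra packages: InfluxDB 2.x accepts raw line protocol on its `/api/v2/write` endpoint. Below is a standard-library-only sketch; the URL, org and bucket come from this thread, while the file name and token are placeholders.

```python
# Sketch: push an exported line-protocol file to the InfluxDB 2.x
# /api/v2/write endpoint using only the Python standard library.
import urllib.parse
import urllib.request

def write_url(base: str, org: str, bucket: str) -> str:
    """Build the InfluxDB 2.x write endpoint URL (nanosecond precision)."""
    q = urllib.parse.urlencode({"org": org, "bucket": bucket, "precision": "ns"})
    return f"{base}/api/v2/write?{q}"

def batches(lines, size=5000):
    """Yield newline-joined chunks so each HTTP request stays small."""
    for i in range(0, len(lines), size):
        yield "\n".join(lines[i:i + size])

def import_file(path: str, base: str, org: str, bucket: str, token: str):
    """POST every non-comment line of a line-protocol file to InfluxDB."""
    with open(path) as f:
        lines = [l.rstrip("\n") for l in f if l.strip() and not l.startswith("#")]
    for body in batches(lines):
        req = urllib.request.Request(
            write_url(base, org, bucket),
            data=body.encode(),
            headers={"Authorization": f"Token {token}"},
            method="POST")
        urllib.request.urlopen(req)  # raises on HTTP errors

# Example call (token is a placeholder):
# import_file("solar_export.lp", "http://192.168.0.16:8086",
#             "privat", "solar", "HA_INFLUX_TOKEN")
```

Note that `influx backup`/`influx restore` themselves need an operator-level token on the source instance, so with viewer-only access a query-based export is the more realistic source for such a file.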

Hm, I would like to get this automated. A manual export/import does not make much sense. As I said before, I do not have root access to the Powerbox, just viewer access to the backend.