Export data from InfluxDB 2.x to a CSV file or directly to another bucket

Hello,

I’m running InfluxDB 2.7.1, installed on Debian.
I currently have data in one bucket and I want to transfer it to a new bucket, but the new bucket doesn’t have the same structure, so I’m looking for the proper way to do it.
I can see two options, but I haven’t been able to set either one up yet.

  1. Export the first bucket’s data to a CSV file, rework the file “manually”, then import it into InfluxDB. I saw some instructions saying to run influx export or influx query, but I don’t have the influx program available, only influxd (see the note after this list).
  2. Write a script that collects each value from the current bucket and saves it into the new bucket with the new structure. Here I face the same problem: without the influx program I can’t run a query. And then I would still need to figure out how to write the correct query.
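
For reference on option 1: the influx CLI is packaged separately from the influxd server in recent 2.x releases (on Debian, the package is typically influxdb2-cli). Assuming it is installed and configured (influx config create with your URL, org, and token), a one-liner like the following sketch would dump the whole bucket as annotated CSV; the output file name is just a placeholder:

influx query 'from(bucket: "Bucketv1") |> range(start: 0)' --raw > bucketv1_export.csv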

@Symptom I’d go with your 2nd proposed route. What is the schema of your current data and how does it need to change when moved to the new bucket?

@scott,

Ohh yes, I forgot to add this useful information.

Here is what I’m using on the old bucket to display the graph:

from(bucket: "Bucketv1")
  |> filter(fn: (r) => r["_measurement"] == "Index Base")
  |> filter(fn: (r) => r["CommandName"] == "Index Base")
  |> filter(fn: (r) => r["_field"] == "Hook")

And on the new bucket, it will look like this:

from(bucket: "Bucketv2")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)  // range() is required after from(); the dashboard time picker supplies these values
  |> filter(fn: (r) => r["_measurement"] == "kWh")
  |> filter(fn: (r) => r["_field"] == "value")
  |> filter(fn: (r) => r["domain"] == "sensor")
  |> filter(fn: (r) => r["entity_id"] == "hook_index_base")

To be honest, I know how the structure of an SQL database works, but I haven’t yet looked into how InfluxDB’s is organized; it looks pretty different. So I don’t feel confident enough to write queries from scratch right away.

@Symptom Ok, so just to clarify, you want to change the following:

  • Update _measurement to kWh
  • Rename the Hook field to value
  • Drop the CommandName tag
  • Add a domain tag with a value of sensor
  • Add an entity_id tag with the value of hook_index_base

Is this correct? Is there anything else in the original data that you want to preserve, aside from what’s listed here and the timestamps?

And change the bucket from Bucketv1 to Bucketv2. Then it’s correct.

Otherwise, I don’t see anything else special that would need to be added.

Cool, ok, so the following query reads all data from Bucketv1 back to 1970-01-01T00:00:00Z (the Unix epoch) and re-maps the columns to their new values. It keeps the existing values in the _time and _value columns, but explicitly sets the other columns. It then writes the modified data to Bucketv2.

from(bucket: "Bucketv1")
    |> range(start: 0)
    |> filter(fn: (r) => r["_measurement"] == "Index Base")
    |> filter(fn: (r) => r["CommandName"] == "Index Base")
    |> filter(fn: (r) => r["_field"] == "Hook")
    |> map(fn: (r) => ({
        _time: r._time,
        _measurement: "kWh",
        _field: "value",
        _value: r._value,
        domain: "sensor",
        entity_id: "hook_index_base"
    }))
    |> to(bucket: "Bucketv2")
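
Once it finishes, one way to sanity-check the migration (a minimal sketch, not part of the transformation itself) is to count the points that landed in Bucketv2 and compare against the same count() run with the original filters on Bucketv1:

from(bucket: "Bucketv2")
    |> range(start: 0)
    |> filter(fn: (r) => r["_measurement"] == "kWh")
    |> filter(fn: (r) => r["_field"] == "value")
    |> count()  // number of points written; should match the source bucket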

If you have too much data and this query takes a long time to run, you may need to run it in time-based batches rather than querying everything from the Unix epoch.
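
In that case, the idea is to replace range(start: 0) with an explicit window and re-run the same query once per window, advancing start and stop each run; the dates below are placeholders for one month of data:

from(bucket: "Bucketv1")
    |> range(start: 2023-01-01T00:00:00Z, stop: 2023-02-01T00:00:00Z)  // one batch
    |> filter(fn: (r) => r["_measurement"] == "Index Base")
    |> filter(fn: (r) => r["CommandName"] == "Index Base")
    |> filter(fn: (r) => r["_field"] == "Hook")
    |> map(fn: (r) => ({
        _time: r._time,
        _measurement: "kWh",
        _field: "value",
        _value: r._value,
        domain: "sensor",
        entity_id: "hook_index_base"
    }))
    |> to(bucket: "Bucketv2")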


It works perfectly.
Thank you!

No problem. Happy to help!