@grant1 Thank you for your suggestion.
Node-RED also looks like a good way to pull data from SQL.
I am struggling to transform SQL data into InfluxDB format when it contains multiple objects.
Are you using a “function” node to transform the data, or a “change” node?
@grant1 Hello again.
I found a way to transform the object.
For the moment, I can use a “split” sequence.
By the way, I noticed that my Flux above only displays the last data row.
I will try to find out how to solve this.
In the MSSQL-PLUS node, I run a SELECT query that returns the measurements (called MeasuredValue) and timestamps (in epoch format, called DateTimeTaken).
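For anyone following along, a transform like this can be done in a Node-RED function node. This is only a rough sketch, assuming the MSSQL-PLUS node outputs an array of rows in msg.payload with the MeasuredValue and DateTimeTaken (epoch seconds) columns described above; the measurement name and the field key are placeholders, not names from the original flow:

```javascript
// Node-RED "function" node body (sketch): convert SQL rows into the
// point format expected by the influxdb batch node. Assumes each row
// looks like { MeasuredValue: 1.5, DateTimeTaken: 1670000000 } with
// DateTimeTaken in epoch seconds — both are assumptions.
function toInfluxPoints(msg) {
    msg.payload = msg.payload.map(function (row) {
        return {
            measurement: "test",                           // placeholder measurement
            fields: { value: row.MeasuredValue },          // arbitrary field key
            timestamp: new Date(row.DateTimeTaken * 1000), // epoch seconds -> ms
        };
    });
    return msg;
}

// Example input, shaped like what the SQL node might produce:
const msg = toInfluxPoints({
    payload: [
        { MeasuredValue: 1.5, DateTimeTaken: 1670000000 },
        { MeasuredValue: 2.5, DateTimeTaken: 1670000060 },
    ],
});
console.log(msg.payload.length); // 2
```

In an actual function node you would drop the wrapper function and just transform msg.payload directly before `return msg;`.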
@grant1
Thank you for your reply.
I’m trying to use your method.
The debug output looks good, but I get an error message.
I will continue to look for the reason.
@Yutaka It could be simplified a little. You can actually use influxdb.wideTo() to write pivoted data back to InfluxDB. So you don’t need separate variables to structure each field. The following example just takes the queried SQL data and appends two columns to each returned row: _time and _measurement.
import "sql"
import "influxdata/influxdb"
sql.from(
    driverName: "mysql",
    dataSourceName: "usradmin:passwd@tcp(localhost:3306)/prod",
    query: "SELECT PJ AS PJ,task AS task FROM prod.linkdata",
)
    |> map(fn: (r) => ({r with _time: now(), _measurement: "test"}))
    |> influxdb.wideTo(bucket: "mariadbtest")
@scott
Thank you for the information.
When I try to use this method, the following error appears.
I tried a few things, shown in the comment section of the Flux,
but I had no luck.
Do you have any suggestions?
Hello @scott
Sorry, I misunderstood the error message.
Group works fine now.
But I notice that after applying the Flux above, when I check the bucket it only displays the last data row.
Is there any way to write all of the data from SQL?
I must say sorry first.
I was using the “influxdb out” node for storage.
When I use the “influxdb batch” node, your function node works.
But when I check the InfluxDB bucket, it only displays the last data row.
Are you sure your SQL query is returning more than one row? It looks like you’re having the same issue with Node-RED, so I’m thinking it’s the SQL query that is returning only the last row.
Ah, I see. InfluxDB uniquely identifies points by their timestamp, tag set, and field key. If you write a point with the same timestamp, tag set, and field key as a point that has already been written, InfluxDB will update the existing data with the more recent data.
All the points you’re writing have the same timestamp, tag set, and field keys. So InfluxDB is writing them all, but each write immediately overwrites the previous point because it sees them as the same point, just with updated data.
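To illustrate the overwrite behavior described above, here's a small sketch (plain JavaScript, not anything InfluxDB-specific) that mimics how points sharing a measurement, tag set, field key, and timestamp collapse into one:

```javascript
// Toy model of InfluxDB's point identity: a point is keyed by its
// measurement, tag set, field key, and timestamp. Writing a point
// whose key already exists replaces the stored value.
function writePoints(points) {
    const store = new Map();
    for (const p of points) {
        const key = JSON.stringify([p.measurement, p.tags, p.field, p.time]);
        store.set(key, p.value); // same key -> overwrite, like InfluxDB
    }
    return store;
}

// Three rows written with identical timestamp, tag set, and field key:
const store = writePoints([
    { measurement: "test", tags: {}, field: "task", time: 1, value: "a" },
    { measurement: "test", tags: {}, field: "task", time: 1, value: "b" },
    { measurement: "test", tags: {}, field: "task", time: 1, value: "c" },
]);
console.log(store.size); // 1 — only the last row survives
```

Giving each row a distinct timestamp (or a distinguishing tag) makes each key unique, which is exactly what the approach below does.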
So one approach would be to incrementally update the timestamp of each row. This gets a little tricky, but here’s an example:
import "date"
import "sql"
import "influxdata/influxdb"
sql.from(
    driverName: "mysql",
    dataSourceName: "usradmin:passwd@tcp(localhost:3306)/prod",
    query: "SELECT PJ AS PJ,task AS task FROM prod.linkdata",
)
    |> map(fn: (r) => ({r with _time: now(), _measurement: "test"}))
    |> group(columns: ["_measurement"])
    |> map(fn: (r) => ({r with index: 1}))
    |> cumulativeSum(column: "index")
    |> map(fn: (r) => ({r with _time: date.add(d: date.scale(d: 10s, n: r.index), to: r._time)}))
    |> influxdb.wideTo(bucket: "mariadbtest")