Is Flux being deprecated with InfluxDB 3.0?

InfluxDB 3.0 sure sounds like it’s going to provide a lot of performance improvements, and our team is interested in upgrading from 2.x. But we now have a lot of Grafana dashboards and custom query tooling that is all written in Flux, and all the documentation I see (including the new Python influx3 client library) seems to have gone back to InfluxQL… is Flux being phased out in favor of a return to InfluxQL?

2 Likes

@Phil_Lundeen,
More emphasis is being placed on InfluxQL and SQL. DataFusion (which provides the SQL support) also has a Pandas API, so the hope is to eventually support Python and pandas natively in v3. But yes, the benchmarks are quite impressive so far, and Flux was proving hard to adopt and slow for some use cases. I understand how this feels like whiplash, though. Luckily, SQL is so well documented and established that it’s easy to use AI chatbots to generate the queries for you from plain English. We’re also here to help convert anything to pandas.

You can still create invokable scripts and tasks with Cloud Serverless.

What does this mean with respect to continuous queries? Will InfluxDB 3.0 get continuous queries again?

The reason I am asking is that I am looking at how best to migrate from 1.x, with one of the questions being how to deal with continuous queries.

Wait, is this the official notice that Flux is being deprecated/removed?

3 Likes

For those of us with a lot of infrastructure built up around Flux (many Grafana dashboards and custom data analysis tools), it’s going to be a real pain point to upgrade from InfluxDB 2.x to 3.x if all of that needs to be rewritten.

Will there be any migration tools provided to help with translating Flux to SQL?

#FluxMyLife

2 Likes

We’re not yet sure how to support Flux in 3.0. We built a compatibility layer in our Cloud product that has a Flux process (written in Go) communicating via gRPC with a 3.0 query engine. We’ve found through testing that it doesn’t perform very well (queries that used to work now time out), and there are a number of hidden bugs, as the protocol is very different from how the underlying 3.0 query engine works.

We were able to bring InfluxQL support in natively because of its similarity to SQL, which is what our new query engine supports by default. That effort took a year: it involved writing an InfluxQL parser in Rust that yields an AST that can be converted into DataFusion logical plans (SQL query plans), and then implementing new features in the SQL engine to support things InfluxQL had that the SQL engine didn’t have built in.
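
For anyone wondering what that native support looks like from the client side, here is a minimal sketch using the influxdb3-python client, asking the same question in InfluxQL and in SQL; the host, token, database, and measurement names are placeholders, not anything official from this thread:

from influxdb_client_3 import InfluxDBClient3

# Placeholder connection details -- substitute your own.
client = InfluxDBClient3(host="HOST", token="TOKEN", database="FurnaceData")

# Both queries are parsed down to DataFusion logical plans by the 3.0 engine.
influxql = 'SELECT LAST("temperature") FROM "SingleRotaryFurnaceZoneData"'
sql = ('SELECT time, temperature FROM "SingleRotaryFurnaceZoneData" '
       "ORDER BY time DESC LIMIT 1")

table_a = client.query(query=influxql, language="influxql")
table_b = client.query(query=sql, language="sql")
print(table_b.to_pandas())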

We don’t yet have the story for what we can do with Flux. It’s significantly more complex than InfluxQL as it’s an entire scripting engine. We’ll continue to support our Cloud2 platform and Flux on it for quite some time (no set future date at this point) as we do more work to figure out what a decent transition can look like.

2 Likes

Currently, we don’t have anything in 3.0 that maps to Continuous Queries. We have some ideas that we’ll be testing out in the near future for features that will map to that functionality. Think an embedded VM with a basic scheduler.

Echoing @Phil_Lundeen’s perspective, our company has put a lot of resources into adopting and embracing Flux, even though we have legacy servers chugging along with 1.8’s InfluxQL. We did this because InfluxData indicated through various channels that Flux was “getting there”.

I understand hard decisions are hard by definition.

That said, what would be a path forward that allows a community lift to make Flux a first-class language that supports FlightSQL and isn’t completely binned? There is a possibility of outside corporate effort being dedicated to this cause (just as is done in other OSS projects like PostgreSQL and Linux).

2 Likes

Another community member recently expressed interest in working on Flux so that it could use FlightSQL as its native backend, which would make it possible to pair it with InfluxDB 3.0 when that is available. To separate that work from the version of Flux we run in production for our customers, we created a new community fork of Flux here: GitHub - InfluxCommunity/flux: This is the community fork of Flux, a lightweight scripting language for querying databases (like InfluxDB) and working with data. It's part of InfluxDB 1.7 and 2.0, but can be run independently of those.

Our goal is for that fork to be community owned and reviewed, so that these kinds of big changes can be made without requiring our review (which is needed today because the mainline version has to run in our production environment).

I love Flux, and since I’ve gained a lot of experience with it, I will be hurt if it’s deprecated.

please don’t break my heart :sob:. :rofl:

I recently started to poke around SQL and it’s a nightmare to me.

2 Likes

Like @fercasjr and many others, I spent time learning Flux because it was (and is) so powerful. I understand SQL is more widely adopted, but many of us have very complex data functions that I believe will be difficult to convert from Flux to SQL (if it’s even possible). I would hope that the absolutely stellar moderators on this forum who have helped so many learn Flux will be able to assist us with moving from Flux to SQL. Maybe create a new forum tag or category for this purpose?

1 Like

Thanks @pauldix.

Would the recommended approach for continuous queries be to build a VM or container that runs scheduled queries as cron jobs, move the existing continuous queries to it as InfluxQL queries, and then upgrade to 3.0? (I’m currently running 1.8.)

We rely a lot on Flux because it is very powerful, and some of the things we do with it don’t appear to us to be possible in SQL.

1 Like

You’re absolutely right that there are many things you can do in Flux that you won’t be able to do in SQL. For extended functionality we’re going to look into a potential embedded VM (Python or JavaScript) which, when paired with SQL, would give all the functionality that Flux provides and more.

Can you tell me a bit more about how you’re using Flux? For the complex things that you don’t think can be represented in SQL, are those Tasks or just regular queries?

Yes, that would likely be the best way to do it. To get that functionality in 3.0, we generally recommend using the Python library and its FlightSQL capability to run the queries, processing whatever you need in Python (it’s a one-liner to turn the result into a dataframe, for instance), and then writing back to the database.
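
To make that concrete, here is a rough sketch of a continuous-query replacement along those lines, using the influxdb3-python client. The connection details and measurement names are invented, and I’m assuming the client’s pyarrow-to-pandas conversion and DataFrame write support behave as in its docs:

from influxdb_client_3 import InfluxDBClient3

# Placeholder connection details -- substitute your own.
client = InfluxDBClient3(host="HOST", token="TOKEN", database="DB")

def downsample_last_hour():
    # Pull the last hour of raw points over FlightSQL (returns a pyarrow Table).
    table = client.query(
        query='SELECT time, temperature FROM "SingleRotaryFurnaceZoneData" '
              "WHERE time >= now() - INTERVAL '1 hour'",
        language="sql",
    )
    df = table.to_pandas()  # the "one-liner" dataframe step
    # Do the old continuous-query aggregation in pandas.
    df = df.set_index("time").resample("5min").mean()
    # Write the downsampled frame back; the index supplies the timestamps.
    client.write(record=df, data_frame_measurement_name="furnace_temp_5m")

# Schedule this with cron (or any scheduler) in place of a continuous query.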

Coarsely speaking, our problem is that we monitor complex systems whose state is distributed across channels (sensors) and time, and we need to intelligently re-merge these streams of data at the server to create actionable information.

Let me give one such example: we monitor industrial data, and while generating alerts at the source is sometimes an option (e.g. sensor X emitting a message that component Y has failed), we absolutely require that we be able to verify, functionally, that component Y has not failed.

In the above scenario, imagine we have signal and sense feedback on breaker A (the position it is commanded to be in versus the position it is actually in); that breaker powers motor B, which starts engine C. Engine C might have an intelligent “failed to start” flag in it.

However, we want to be able to check continuously that if at time t0 A was given a close signal, then at time t1 A’s feedback should read high, at time t2 motor B should be spinning (above a threshold RPM value), and at t3 engine C should be running at its designated RPM. This is one such thread of actionable information, but I could also be checking A and B combined with another effect D, and I could also be checking C against A’ and B’, meaning these aren’t readily pre-processable into their own “final destination” data streams. In other words, the problem isn’t merely that the data is ‘denormalized’.

All of these signals will be coming from completely heterogeneous sensors and we need to be able to time-align them and time-shift them.

We currently achieve this workflow by downsampling, change detecting, time-shifting, pivoting, forward filling, and applying a multi-column logic statement to the pivoted table; we then unpivot this into a time-series column that reads “OK/NOK”. Doing these checks at ingest time can be painful because we get data arriving out of order on a 10-15 horizon (although we do pre-process some of our data using Starlark and imbue “last-seen state” into parallel streams).
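
For what it’s worth, since the suggestion above is to move this kind of in-memory work to pandas, a rough sketch of that pivot/forward-fill/multi-column-logic pattern might look like the following; the sensor names, thresholds, and schema are all invented for illustration:

import pandas as pd

# Invented schema: one row per reading, in long/narrow form.
df = pd.DataFrame({
    "time":   pd.to_datetime(["2023-01-01 00:00", "2023-01-01 00:01",
                              "2023-01-01 00:02", "2023-01-01 00:03"]),
    "sensor": ["breaker_cmd", "breaker_fb", "motor_rpm", "engine_rpm"],
    "value":  [1.0, 1.0, 1200.0, 950.0],
})

# Pivot to one column per sensor, then forward-fill so every row
# carries the last-seen state of every channel.
wide = df.pivot(index="time", columns="sensor", values="value").ffill()

# Time-shift: check the feedback against the command one step earlier.
wide["breaker_cmd_prev"] = wide["breaker_cmd"].shift(1)

# Multi-column logic statement over the pivoted table.
ok = ((wide["breaker_fb"] == wide["breaker_cmd_prev"])
      & (wide["motor_rpm"] >= 1000)
      & (wide["engine_rpm"] >= 900))

# Unpivot back into a single OK/NOK time series.
print(ok.map({True: "OK", False: "NOK"}).rename("status"))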

Note that there are too many combinations of usable threads of information for us to just “write them all back to the database”. We need to be able to query and explore.

I can easily imagine an example along these lines where the chemical properties of a biological life-support system are monitored (e.g. aquaculture), where you have very long lag markers that need to be merged back into individual streams.

If I were to summarize it, it would be that our measurements are multiple semi-stochastic views of an underlying process that cannot be directly observed.

I cannot overstate how valuable the ability to diagnose these things is in our market segment.

As an aside: given my understanding of how Flux works, isn’t the push-down query the only problem that needs solving? The in-memory part already works, does it not? Why reinvent that part in JS if it already works?

The value-add of Flux is its syntax and logical design intent. I would be absolutely happy with a “sub-optimally” performing Flux interpreter for tasks that forces me to write my push-down queries correctly. Heck, as I type this, it occurs to me that I’d be perfectly happy using Flux with a sql.from() |> construct where the push-down query is explicitly passed down to the IOx server.

Is this for real? After all these years of pushing people to adopt Flux over *SQL, you are discontinuing it? That’s millions of hours of training time (and productivity) lost.

That’s the kind of pivot that can seriously shake consumer faith in a product.

4 Likes

Thanks @pauldix

Below is a query that we use to calculate the Overall Equipment Effectiveness (OEE) of a continuously fed furnace. The query considers only those periods of time when the temperature is >= 1450 and the feedrate is >= 0, but we may also throw in other filters like hourSelection(), exclude Saturdays or Sundays, or consider another operating parameter (gas flowrate >= 100, pumpStatus = ON, etc.). We may also include a comparison of actual vs. setpoint to capture those times when we are changing over from one recipe to another.

We use this to calculate OEE on a daily, weekly, monthly or yearly basis.

PS: The reason we aggregate into 5-minute chunks is that the feeder system runs in 5-minute cycles, i.e. it dispenses 65 lbs over 5 minutes, then sends a record to InfluxDB of how much was actually dispensed (e.g. 66.7).

t1 = from(bucket: "FurnaceData")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "SingleRotaryFurnaceZoneData")
  |> filter(fn: (r) => r["EquipZone"] == "zone2")
  |> filter(fn: (r) => r["EquipNumber"] == "6")
  |> filter(fn: (r) => r["MeasType"] == "actual")
  |> filter(fn: (r) => r["_field"] == "temperature")
  |> map(fn: (r) => ({r with AtOperatingTemp: if r._value >= 1450 then "true" else "false"}))
  |> aggregateWindow(every: 5m, fn: last, createEmpty: false)
  |> filter(fn: (r) => r["AtOperatingTemp"] == "true") 
  |> count(column: "AtOperatingTemp")


t2 = from(bucket: "FeedrateData")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "SingleRotaryFurnaceElementData")
  |> filter(fn: (r) => r["EquipNumber"] == "6")
  |> filter(fn: (r) => r["MeasType"] == "actual")
  |> filter(fn: (r) => r["_field"] == "Feedrate")
  |> filter(fn: (r) => r._value >= 0)
  |> map(fn: (r) => ({r with FeederRunning: if r._value >= 1 then "true" else "false"}))
  |> aggregateWindow(every: 5m, fn: last, createEmpty: false)
  |> filter(fn: (r) => r["FeederRunning"] == "true") 
  |> count(column: "FeederRunning")


t3 = join(tables:{temperature:t1, feedrate:t2}, on:["EquipNumber"])
  |> map(fn: (r) => ({r with percent_true: float(v:r.FeederRunning) / float(v:r.AtOperatingTemp) }))
  |> drop(columns: ["FeederRunning", "AtOperatingTemp"])  
  |> yield(name: "percent_running")
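
For comparison, here is a rough, untested sketch of how the core of this calculation might translate to InfluxDB 3.0 SQL (run here through the Python client). It assumes both measurements end up as tables in the same database after migration, uses max per 5-minute bin instead of last for simplicity, and uses a fixed time range as a stand-in for the dashboard variables:

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host="HOST", token="TOKEN", database="FurnaceData")

# Count the 5-minute bins where each condition held, then take the ratio.
oee_sql = """
WITH temp AS (
    SELECT date_bin(INTERVAL '5 minutes', time) AS bin
    FROM "SingleRotaryFurnaceZoneData"
    WHERE "EquipZone" = 'zone2' AND "EquipNumber" = '6' AND "MeasType" = 'actual'
      AND time >= now() - INTERVAL '1 day'  -- stand-in for v.timeRangeStart/Stop
    GROUP BY 1
    HAVING max(temperature) >= 1450
),
feed AS (
    SELECT date_bin(INTERVAL '5 minutes', time) AS bin
    FROM "SingleRotaryFurnaceElementData"
    WHERE "EquipNumber" = '6' AND "MeasType" = 'actual'
      AND time >= now() - INTERVAL '1 day'
    GROUP BY 1
    HAVING max("Feedrate") >= 1
)
SELECT CAST(f.n AS DOUBLE) / CAST(t.n AS DOUBLE) AS percent_running
FROM (SELECT count(*) AS n FROM feed) f, (SELECT count(*) AS n FROM temp) t
"""

print(client.query(query=oee_sql, language="sql").to_pandas())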

I also come from the industrial sector; time series works like a charm with machinery.

Flux allows us to do something in between signal processing and data correlation, where relational databases and SQL do not make sense.

I divided most of my scripts into two use cases: “live data” and “post-processing.”

Each machine can have many sensors and process variables: Boolean, integer, string, and float.
It is not as simple as just data logging and using thresholds; I need to mix and match to determine efficiencies, anomalies, and trends.

One example is the “remaining time” dashboard that I built: one list prompts the machines that are about to enter a specific stage requiring human intervention, and another list shows the systems that are shutting down, with their estimated time remaining.

Those times are based on a “recipe” and the runtime. However, the UI on the machine allows the user to jump to the next step or tweak the times, and certain automatic conditions will shut the machine down. The shutdown process can last up to 14 hrs, but it may fully complete before that, and in a factory cycle time is important: if a machine finishes the shutdown process 5 hrs early, the next process can be restarted early.

What I do is read error states, process variables, remaining recipe time, last recipe step, and batch info, and use all of that and its timestamps to estimate the time to shutdown. If something changes, I re-sort based on priority to let the operators know which machine is ready to be loaded again.

For post-processing I use Influx tasks:

- Detect finished runs and push the results to a different bucket that acts as an index, from which I can later retrieve which machine ran, when, and for how long.

- Node-RED then uses the above information to execute a complex sequence of queries to get specific stats and saves the results in the same “indexed” runs bucket.

And don’t get me wrong, this last bucket would be perfect for an SQL-like syntax, because the result is effectively a relational table, but Flux scripting is what I use to build those results.

In my opinion, Flux is perfect as it is; the only thing I lack is the option to filter based on dynamic variables, but Grafana and Node-RED compensate for that.

1 Like

Flux already has a way to operate as a process of its own that hits the DB remotely, but the API is the problem. For other data sources like sql.from(), this is easy: we already have a FlightSQL method for that.

But most Flux scripts are written with the default built-in from(), combined with the other operators applied against it. Those get combined in different ways to ensure that the DB only returns what the script needs.

Updating Flux so that all of that default machinery works, but over the FlightSQL protocol, is the work that would need to happen for people to make the move without rewriting some portion of their Flux scripts.

Just making Flux work against the DB as a separate process is easy enough; making it work with existing Flux scripts is the challenge.