Handling counter data in InfluxDB 2

Hello,

I have data that always increments and that looks like a counter. Each point is the last value + the increment.

Example:
I want to add 5 on each point.
My points should look like this: 0; 5; 10; 15; 20; 25 …

A Prometheus counter seems like a good fit for my case, but I couldn’t find the InfluxDB equivalent.

I have currently thought of 3 solutions.

  1. Cache
    Solution: Use a cache that I initialise with the last value, and that I use to know the value to write to the DB.
    Drawback: The cache is a single point of failure, and I would prefer to avoid such a hard reliance on a cache.

  2. No longer use a counter when writing
    Solution: Only write the actual value (the increment) in the database, not the cumulative value. I would then have to calculate the sum at read time.
    Drawback: This seems expensive, especially as the amount of data grows. Since I would need to visualise data over an arbitrary period of time, that would also mean a lot of calculation.

  3. Query on every write
    Solution: When writing data, I could query the last value each time, increment it, and then write the correct value to the database.
    Drawback: This seems expensive on each request. I also wonder whether it would have any issues with InfluxDB batching requests…
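For concreteness, the cache idea from option 1 can be sketched in plain JavaScript. This is illustrative only: the class name is made up, and in a real setup the initial value would come from a `last()` query against InfluxDB at startup.

```javascript
// Hypothetical sketch of option 1: an in-process cache holding the last
// cumulative value. In practice the constructor argument would be seeded
// from an InfluxDB last() query at startup.
class CumulativeCache {
  constructor(initialValue) {
    this.total = initialValue; // last cumulative value read from the DB
  }
  // Add an increment and return the cumulative value to write to InfluxDB.
  add(increment) {
    this.total += increment;
    return this.total;
  }
}

// Usage: initialise once, then compute each point locally before writing.
const cache = new CumulativeCache(0);
const points = [5, 5, 5, 5].map((inc) => cache.add(inc));
// points is now [5, 10, 15, 20] — the values to write to InfluxDB.
```

The drawback noted above still applies: if this process restarts without re-reading the last value from the database first, the running total is lost.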

None of these solutions seems optimal, but I might be wrong in my reasoning about the overall approach or about the drawbacks of each solution.

I have data that always increments and that looks like a counter. Each point
is the last value + the increment.

Are you telling us that this is a description of the raw data you have, and
want to feed into Influx, or is this a description of some operation you want
to perform on raw data (which is, perhaps, just the increments themselves)?

Example:
I want to add 5 on each point.
My points should look like this: 0; 5; 10; 15; 20; 25 …

So, what is the raw data in this case? I can imagine either that it is the
series 0, 5, 10, 15… or that it is the series 5, 5, 5, 5…

Please tell us clearly:

  1. what your raw data is

  2. where you are getting this raw data from (some script, an existing process,
    a log file…)

  3. how you are feeding it into Influx (Telegraf perhaps?)

  4. what the data actually means - what sort of queries will you be doing
    on it afterwards?

  5. do you have some policy on “resetting” the cumulative value to zero, or is
    the number simply required to increase into the future without limit?

Antony.

Hello, thank you for the answer and sorry I haven’t been clear enough in the first place.

1/
My raw data would look like 5, 5, 5, 5, 5.
In the end I want data that looks like 5, 10, 15, 20 when I retrieve it from the DB. But it doesn’t have to be stored in this format in Influx (it could stay 5, 5, 5, 5, 5) as long as I can retrieve it as a cumulative sum when I query.

2/ 3/
I am writing the data myself via influxdb-client-js (a script running permanently that writes data when some event I defined happens).
I don’t use Telegraf, although I could if that would help my case.

So the raw data is in the format 5, 5, 5, 5, 5, and I want to feed Influx either by writing it as 5, 5, 5, 5, 5 OR as 5, 10, 15, 20. In the end I need to be able to retrieve it as a cumulative sum.

4/
I want to be able to visualise the data over any period of time as a graph, to see its evolution (always incrementing, but the idea is to see the variations).
I also want to be able to get the last value (the current value of the data) often (more often than I will need to visualise the evolution as a graph).

I could potentially have a lot of points in the future. And the number of queries to either get the last value or visualise the data is not negligible, as they would come directly from end users wanting to see the data.

5/
In my example the increment of 5 was purely arbitrary; in reality it can be any positive number. The cumulative value will never reset and will always increase without limit.

I’m not a moderator, but I’ve been studying these kinds of problems, too, and want to understand the problem better. First, I want to make sure I understand what you want. You want to write a number to InfluxDB and have InfluxDB automatically store the cumulative value? You use Prometheus as an example, but I’m not familiar with what Prometheus does. Can you describe the functionality?

I think we can agree that InfluxDB does not provide this functionality. So now you’re asking what’s the best choice.

One is to have your program write the cumulative value, but that would require it to store the previous value, and you don’t want to rely on that.

The second possibility is to calculate the cumulative value every time you query the database. So, let’s say your datastream is called “data”, you’d have a query like this:
data |> cumulativeSum()
(see cumulativeSum() function | Flux 0.x Documentation)
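For intuition, what cumulativeSum() does to the _value column can be modeled in a few lines of plain JavaScript (this is only an illustration of the semantics, not how Flux executes it):

```javascript
// Models Flux's cumulativeSum(): each output element is the running sum
// of all input elements up to and including that position.
function cumulativeSum(values) {
  let running = 0;
  return values.map((v) => (running += v));
}

cumulativeSum([5, 5, 5, 5, 5]); // → [5, 10, 15, 20, 25]
```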

The third possibility is to make the accumulation yourself. So you’d have a separate field or something with the cumulative value and get the last value as follows:
data |> last()
Then add that value to your current value and write it to the database. I agree that this seems expensive and complicated. You can set up a task to do this automatically: it would take the cumulative sum of the last hour’s data, add to it the last value of the previous hour’s cumulative data, and write that back to the database. It would be something like this:
lastval =
from(bucket: "sum_data")
|> range(start: -2h)
|> last()
|> findRecord(fn: (key) => true, idx: 0)

from(bucket: "raw_data")
|> range(start: -1h)
|> cumulativeSum()
|> map(fn: (r) => ({ r with _value: r._value + lastval._value }))
|> to(bucket: "sum_data")

I think the second choice is best if you can be sure that cumulativeSum isn’t too expensive. My question is this: Is the cumulativeSum() function really too expensive to run on each query? What is too expensive for you, and can you test this somehow?

Hello, thanks for the response.

You want to write a number to InfluxDB and have InfluxDB automatically store the cumulative value?

Yes, that’s exactly what I am trying to do. In fact, as I said and as you understood, I don’t really need the cumulative value when storing, but I need it when retrieving data. I just thought storing it as cumulative was the best choice.

You use Prometheus as an example, but I’m not familiar with what Prometheus does. Can you describe the functionality?

I am not very familiar with Prometheus either, but I was referring to this feature that I found when looking for solutions: Metric types | Prometheus

I feel like you’ve understood the problem correctly, and what you said makes sense overall.

For the second possibility (the cumulative sum), I might have ways to run tests, but nothing that would be super conclusive.
Are there benchmarks or time estimates for the various InfluxDB functions relative to the number of points?

At display I want specifically to be able to:

  • display the evolution of the data globally, between 2 points in time (for instance the month of January), but always based on the global value.
    So we can’t do the cumulative sum without first calculating the previous values for that period of time.
    This could take up to 1 second, I guess, and that would be fine.
  • display the actual total value. This should be almost instant.

For the total value display, I could actually have a cache, but only when reading (not when writing), which could be pretty OK.
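A read-side cache for the total could look something like the sketch below. Everything here is hypothetical: `fetchTotal` stands in for the real InfluxDB query, and the TTL is arbitrary.

```javascript
// Hypothetical read-side cache: serve the current total from memory and
// only re-run the (expensive) query after ttlMs has elapsed.
// fetchTotal is a placeholder for the real InfluxDB query; the `now`
// parameter exists so the clock can be faked in tests.
function makeCachedTotal(fetchTotal, ttlMs, now = Date.now) {
  let cached = null;
  let fetchedAt = -Infinity;
  return async function getTotal() {
    if (now() - fetchedAt >= ttlMs) {
      cached = await fetchTotal(); // refresh from the database
      fetchedAt = now();
    }
    return cached;
  };
}
```

Unlike a write-side cache, losing this one is harmless: the next read simply pays the cost of one query again.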

I just have no idea what the delay of these cumulative sum queries would be. I could really have a lot of data.
I know I am also supposed to downsample the data, which could help with that, but I have no idea how to proceed with this aspect of InfluxDB yet.

The idea of using a task seems interesting, but I am not familiar with them at all. Would it be OK performance-wise / would it be a good solution or not? It seems quite expensive as well, but I can’t tell.

I’m sorry for not replying sooner, but I don’t know how to advise you and wanted to think about it first.

The first thing I need to say is that I made a mistake. You want the cumulative value from the very beginning, so you would need to call cumulativeSum on all the data in the series. I think that’s a bad idea. First, it will take longer and longer as time goes on. At first it may not be by much, but after a year it may be a problem. It just doesn’t seem rational. The bigger problem, which I didn’t even think about, is that querying old data is very expensive. I think the shard duration is at most seven days, even with an infinite retention policy, which means InfluxDB would have to decompress all that old data. That’s very bad.

I recommend setting up a task that runs every hour or even every ten minutes. I can’t tell you anything about the performance. I asked this question once, but I don’t remember the answer. You’d have to look at everything that is going on in your database and ask whether yet one more query (the task) will impact performance, and whether it makes a difference if it runs every ten minutes over ten minutes of data, or every hour over an hour of data. I think you need to do your own testing, because I haven’t seen performance data that can be used as a guide.

We can think about this together, but I don’t have any expert advice. You can share your ideas about how to set up the task and how to test it.

Thanks for your answer @simon38 and sorry for the late reply.
After thinking about it again, I realised the problem is more complex. In fact, the example I gave above is a very simplified version of our use case.

If we were to have a Counter data structure in InfluxDB I would need something that looks like this:

nextIntCounter("tag", valueToIncrease, filterFunction)

where the filterFunction would get the last value for "tag" based on the parameters.

This makes it a bit harder than simple counters: they don’t simply depend on the last value but on more complex parameters.
This also makes a sub-request on every write irrelevant (because it would mean at least 3 sub-requests each time in my case).
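To make the idea concrete, the hypothetical nextIntCounter could be sketched over a plain in-memory store. The store, the record shape, and the function itself are all invented for illustration; none of this is a real InfluxDB API.

```javascript
// Hypothetical sketch of nextIntCounter(tag, valueToIncrease, filterFn)
// over an in-memory array of {tag, value} points. filterFn decides which
// earlier points count as "previous" for this counter.
function nextIntCounter(store, tag, valueToIncrease, filterFn) {
  const previous = store.filter((p) => p.tag === tag && filterFn(p));
  const last = previous.length ? previous[previous.length - 1].value : 0;
  const point = { tag, value: last + valueToIncrease };
  store.push(point);
  return point.value;
}

// Usage: two counters sharing one store, kept separate by tag.
const store = [];
nextIntCounter(store, "a", 5, () => true); // → 5
nextIntCounter(store, "a", 5, () => true); // → 10
nextIntCounter(store, "b", 3, () => true); // → 3
```

Done against the real database instead of an array, the `store.filter` step is exactly the extra sub-request per write described above, which is why it looks expensive.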

As for tasks, I am not sure that’s the best idea either, as it would most likely be complex as well, and expensive.

I think my best option is still a cache. I just need to make it super robust and accept that I don’t have any choice other than this single point of failure.

Maybe a staff member can add more input to this discussion (cc @Anaisdg, sorry for the ping).

Hello.

I think I am quite late to the party, but I am trying to solve a similar problem.
I need a unique identifier for each row. However, after a short bit of research, I found out there is no hidden row_id or anything similar in InfluxDB.

So, the solution I am going to implement is to use another, external DB engine to generate unique ids for me.

I am quite familiar with PostgreSQL, so what I would do is create a sequence in PostgreSQL (I don’t have Postgres in my current environment, but I do have SQLite, which hopefully has this capability).

The workflow now is:
1.) Get data from my Python app.
2.) Transform the data into data points, one row at a time.
3.) When creating each data point for Influx, call another DB method that will call into SQL and request the next value from the sequence.
4.) Add this value to the data point.
5.) When all data points are prepared, write the data.
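Steps 2–5 above can be sketched as follows. To keep the sketch self-contained, the external SQL sequence is modeled by a plain closure; in a real setup this would be PostgreSQL’s nextval() or an AUTOINCREMENT column in SQLite, not an in-process counter.

```javascript
// Illustrative stand-in for an external SQL sequence: each call returns
// the next unique id. In practice this would be a query against Postgres
// (nextval) or SQLite, which survives restarts — this closure does not.
function makeSequence(start = 1) {
  let next = start;
  return () => next++;
}

// Workflow steps 2–5: tag each data point with a unique id, then the
// whole batch would be written to InfluxDB in one go.
const nextId = makeSequence();
const rows = [{ v: 5 }, { v: 5 }, { v: 5 }];
const taggedPoints = rows.map((row) => ({ id: nextId(), ...row }));
// taggedPoints: [{id: 1, v: 5}, {id: 2, v: 5}, {id: 3, v: 5}]
```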

It’s not as error-proof as it would be if sequences were an internal function of InfluxDB, but it’s the best I can think of.