How to get epoch time as a long accurately and efficiently in Java using influxdb-java?

Hello all,

I am currently using influxdb-java v2.17 to connect to an instance of InfluxDB v1.8.2.

My data is currently stored with nanosecond-precision epoch timestamps (e.g. 1598647940000078125). Originally, I was querying the data using something like the following:

QueryResult queryResult = connection.query((Query) query, TimeUnit.NANOSECONDS);

But this returns a double in which the last few nanoseconds are lost. For example, the above epoch is stored as the Java double 1.59864794000007808E18, so when I convert it to a long I get 1598647940000078080 instead of 1598647940000078125.
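This truncation can be reproduced without InfluxDB at all; it is inherent to double, whose 53-bit mantissa cannot represent a ~61-bit nanosecond epoch exactly. A minimal demonstration:

```java
// Demonstrates the precision loss: a nanosecond epoch near 1.6e18 needs
// about 61 significant bits, but a double's mantissa holds only 53, so
// the low-order bits are rounded away on conversion.
public class DoublePrecisionDemo {
    public static void main(String[] args) {
        long ns = 1598647940000078125L;  // nanosecond epoch from the question
        double d = (double) ns;          // rounds to 1.59864794000007808E18
        long back = (long) d;
        System.out.println(back);        // 1598647940000078080, not ...078125
    }
}
```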

My workaround was to remove the TimeUnit.NANOSECONDS part of the query so that I would get the timestamp as a string of the form YYYY-MM-DDTHH:MM:SS.NNNNNNNNNZ, which I could then turn into a Java Instant like so:

Instant instant = Instant.parse(timeString);
long epoch = instant.getEpochSecond() * 1_000_000_000L + instant.getNano();
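As a side note, the same conversion can be written with TimeUnit instead of a literal multiplier (a stylistic variant, not a performance fix). The timestamp below is the question's example epoch rendered as RFC3339; the class and method names are mine:

```java
import java.time.Instant;
import java.util.concurrent.TimeUnit;

public class EpochNanos {
    // Convert an RFC3339 timestamp string to a nanosecond epoch long,
    // using TimeUnit in place of the 1_000_000_000L magic number.
    static long toEpochNanos(String timeString) {
        Instant instant = Instant.parse(timeString);
        return TimeUnit.SECONDS.toNanos(instant.getEpochSecond()) + instant.getNano();
    }

    public static void main(String[] args) {
        // The question's example epoch, rendered as RFC3339
        System.out.println(toEpochNanos("2020-08-28T20:52:20.000078125Z"));
        // prints 1598647940000078125
    }
}
```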

But for thousands of points, I believe this carries quite a performance hit.

What then, is the most efficient and accurate way to obtain this timestamp? Thanks.

Hi @generic_coder -

I think you’ve found a limitation of the java library. It’s parsing the ns timestamps into doubles when it should be longs. I’d parse the RFC3339 timestamp myself; see InfluxDB API reference | InfluxDB OSS 1.8 Documentation

If you run into a performance problem, we can work on that when you get to it but it doesn’t sound like you have a current performance issue.

Hello and thanks for your response. Do you mean manually converting the RFC3339 timestamp to a Unix nanosecond-precision epoch using custom logic? I did not find anything on the link you provided that pertained to that.

To put it into context: parsing 2,304,000 records' timestamps (using my code above) took 69.64 s, whereas extracting the value from those records took only 2.08 s. Converting the timestamps to something usable took almost 33.5 times as long as extracting the field of interest, and that is too slow for what I need.

Could you detail a little more what you mean by “parse the timestamp yourself”? Thank you.

Sorry for my confusing wording.

By "parse the timestamp yourself" I meant what you are doing already with Instant instant = Instant.parse(timeString).

I understand that is too slow for you. The double return type is an issue with the JSON library influxdb-java uses when it parses the JSON result from the /query endpoint: see Get integer data from InfluxDB · Issue #153 · influxdata/influxdb-java · GitHub.
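If Instant.parse itself is the bottleneck, one workaround (a sketch of my own, not influxdb-java API) exploits the fact that consecutive points in a series usually fall within the same second: cache the epoch second for the last-seen date-time prefix and hand-parse only the fractional digits, so Instant.parse runs at most once per distinct second. The class name and approach here are hypothetical and untested against your data:

```java
import java.time.Instant;

// Hypothetical helper (not part of influxdb-java): converts RFC3339
// strings like "2020-08-28T20:52:20.000078125Z" to epoch nanoseconds,
// caching the epoch second of the last-seen "YYYY-MM-DDTHH:MM:SS"
// prefix so Instant.parse runs at most once per distinct second.
public class CachedRfc3339Parser {
    private String lastPrefix = null;  // first 19 chars: date + time to seconds
    private long lastEpochSecond;

    public long toEpochNanos(String ts) {
        String prefix = ts.substring(0, 19);
        if (!prefix.equals(lastPrefix)) {
            lastEpochSecond = Instant.parse(prefix + "Z").getEpochSecond();
            lastPrefix = prefix;
        }
        long nanos = 0;
        // parse the optional fractional part between '.' and the trailing 'Z'
        int dot = ts.indexOf('.');
        if (dot >= 0) {
            int end = ts.length() - 1;  // stop before the trailing 'Z'
            int digits = 0;
            for (int i = dot + 1; i < end; i++) {
                nanos = nanos * 10 + (ts.charAt(i) - '0');
                digits++;
            }
            // scale up if fewer than 9 fractional digits were present
            for (; digits < 9; digits++) nanos *= 10;
        }
        return lastEpochSecond * 1_000_000_000L + nanos;
    }
}
```

Whether this actually beats Instant.parse for your 2.3 million records would need measuring, but it removes most of the per-record date arithmetic when timestamps are densely packed.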

Since you are using InfluxDB v1.8+, you could switch to the influxdb-client-java library. The downside for you is that you would need to convert your queries to Flux: as far as I can tell, this newer Java client doesn't support InfluxQL, even though v1.8+ itself accepts both Flux and InfluxQL queries.

I don’t know your use case but maybe another precision (microseconds?) would be ok?

I’m sorry I haven’t seen a better option for you here. Both of these Java libraries are community maintained. You might have some luck filing an issue, and you can always implement the code you need and submit it as an improvement to the library, too.

If you figure something out here that works for you, please let us know. The forum would benefit from any solution you find.