Schema for potential duplicate entries

I am storing stock orders and trades, which have a unique ID. Everything I am reading suggests not to use a unique ID as a tag due to high cardinality. However, sometimes I get real entries with the same tag set and the same nanosecond timestamp. One solution suggested on the internet is to increment the nanosecond timestamp, but a lot of the reason for using this database is so that we can do analysis on nanosecond timings, and fudging the timestamp goes against that. The other solution I see is to add a fudging tag, but that is a pain because we would then have to detect duplicates in our ingesting system and add the fudge tag prior to inserting.
Any other options out there?

Hello @Iofogo,
Welcome. I’m having trouble understanding your problem; can you help me understand better? What do you mean by “sometimes you get real entries with the same tag”, and what do you mean by “add a fudging tag but that is a pain because we would have to determine duplicates”? What are real entries vs. false entries? What is a fudging tag? Why would you have duplicates, and why do you have to determine them prior to inserting?
Thank you.

I have data points with the exact same timestamp and the exact same set of tags. Ordinarily the second point would overwrite the first in this case, but they are two distinct data points, so I have to differentiate them in some manner.

Yes, you’ll want to include another tag for your “duplicate” points then. Where is your data coming from? Why is it hard to just add a tag to points that are “duplicate”?
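Just to make that concrete: the ingest-side check doesn’t have to be expensive. A minimal sketch (pure Python, emitting InfluxDB line protocol; the `seq` tag name and `make_deduper` helper are hypothetical, not part of any library) of appending a disambiguating tag only when a collision actually occurs could look like:

```python
from collections import defaultdict

def make_deduper():
    """Return a function that converts a point to a line-protocol string,
    appending a hypothetical `seq` tag to every point after the first
    that shares the same measurement, tag set, and timestamp, so later
    points do not overwrite earlier ones and the timestamp stays intact."""
    seen = defaultdict(int)  # (measurement, tags, ts) -> count seen so far

    def to_line(measurement, tags, fields, ts_ns):
        key = (measurement, tuple(sorted(tags.items())), ts_ns)
        seq = seen[key]
        seen[key] += 1
        all_tags = dict(tags)
        if seq > 0:
            # Only true collisions pay the extra tag; first point is untouched.
            all_tags["seq"] = str(seq)
        tag_str = ",".join(f"{k}={v}" for k, v in sorted(all_tags.items()))
        field_str = ",".join(f"{k}={v}" for k, v in fields.items())
        return f"{measurement},{tag_str} {field_str} {ts_ns}"

    return to_line
```

With two points carrying identical tags and the same nanosecond timestamp, the second comes out with `seq=1` while the first is written unchanged, so both survive and the original timing is preserved. Note the `seen` map only needs to cover the window in which collisions can occur, not all history.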