If multiple OpenTSDB instances are deployed, how is UID assignment kept consistent?


hai wang

Sep 19, 2017, 7:00:40 AM
to OpenTSDB
There is no distributed lock used in OpenTSDB, so suppose we deploy multiple OpenTSDB instances for writing. For the same metric, tagk, or tagv (say the metric cpu.usage), multiple instances may each try to assign a unique ID to cpu.usage, and each could end up with a different UID, so the UIDs would be inconsistent.

Is my concern correct?

John Humphreys

Oct 24, 2017, 11:00:54 AM
to OpenTSDB
Does anyone know the answer to this?  It would be rather frightening if that were the case.

I frequently see it complain about UID collisions on creation when multiple writers are running and new metrics/tags appear in the feed.

I kind of assumed it tried to create the UID, failed if it already existed, and then read the UID again afterward to get the official one.  I haven't looked at the implementation, though, to see whether it is safe and actually does this.

ManOLamancha

Jan 29, 2018, 8:45:15 PM
to OpenTSDB
There is actually a distributed lock on the UID counter value in row "\x00" for each type (metric, tagk, tagv) in HBase and Bigtable. What happens is that TSD A gets a new string that needs a UID. It performs a "get" for the string-to-UID mapping and, if it doesn't find one, issues an atomic-increment call on the UID counter. In the meantime, TSD B can perform the same task and may also issue the atomic-increment request. TSD A then executes a CAS mapping the string to its UID, expecting the existing column to be null, and successfully writes the mapping. When TSD B executes its CAS, it finds that A already wrote the mapping, so it discards the UID it had and reads the proper UID.

So if multiple TSDs see the same new strings, we can waste UIDs because the counter is incremented more than once, but we will never write the same string with the wrong UID.
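
For reference, here is a minimal, runnable sketch of that flow. The Store interface and MemStore class below are hypothetical stand-ins for the HBase/Bigtable primitives (get, atomic increment, compare-and-set); the real logic lives in OpenTSDB's UniqueId class on top of AsyncHBase, not this API.

import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical storage primitives; stand-ins for HBase/Bigtable operations.
interface Store {
    Optional<Long> getUid(String name);     // read the name -> UID mapping
    long incrementCounter();                // atomic increment of the UID counter row
    boolean casUid(String name, long uid);  // write mapping only if the column is null
}

// In-memory stand-in so the sketch runs; real storage is HBase or Bigtable.
final class MemStore implements Store {
    private final ConcurrentHashMap<String, Long> uids = new ConcurrentHashMap<>();
    private final AtomicLong counter = new AtomicLong();
    public Optional<Long> getUid(String name) { return Optional.ofNullable(uids.get(name)); }
    public long incrementCounter() { return counter.incrementAndGet(); }
    public boolean casUid(String name, long uid) { return uids.putIfAbsent(name, uid) == null; }
}

final class UidAssigner {
    private final Store store;
    UidAssigner(Store store) { this.store = store; }

    // Returns the canonical UID for `name`, assigning one if needed.
    long getOrCreate(String name) {
        // 1. Fast path: the mapping may already exist.
        Optional<Long> existing = store.getUid(name);
        if (existing.isPresent()) return existing.get();

        // 2. Reserve a candidate UID via the atomic counter. Another TSD
        //    may be doing the same thing concurrently.
        long candidate = store.incrementCounter();

        // 3. CAS the name -> UID mapping, expecting no existing column.
        if (store.casUid(name, candidate)) {
            return candidate;  // we won the race; our candidate is canonical
        }
        // 4. We lost the race: discard `candidate` (a wasted UID) and read
        //    the mapping the winner wrote.
        return store.getUid(name).orElseThrow(IllegalStateException::new);
    }
}

public final class UidDemo {
    public static void main(String[] args) {
        UidAssigner assigner = new UidAssigner(new MemStore());
        System.out.println(assigner.getOrCreate("cpu.usage")); // 1
        System.out.println(assigner.getOrCreate("cpu.usage")); // 1 again: consistent
    }
}

The key point is step 4: losing the CAS wastes a counter increment, but every TSD ends up reading the same mapping, so the UID for a given string stays consistent.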