Using TS.ADD with '*' and still getting "Timestamp cannot be older than the latest timestamp"


Manish Sharan

Mar 6, 2020, 5:21:26 PM
to RedisTimeSeries Discussion Forum
Hi 

I am using Lua to access the time series module from Clojure:


redis.call('TS.ADD', _:site-ticks-ts-total-key, '*', 10, 'RETENTION', _:retention-period, 'LABELS', 'siteid', _:siteid, 'event', 'device-tick-a')


This code is called in a loop. My code and Redis are both running on the same desktop.

Please note that when I modify my code to pass an incrementing long value, I do not see this issue, so I think the problem may be in how '*' is being substituted. Also note that I call FLUSHDB before every test run.


127.0.0.1:6379> ts.info si:5e46fab2c26e1f2e4b066cd8:dvc:ticks:total:ts
 1) totalSamples
 2) (integer) 98
 3) memoryUsage
 4) (integer) 4268
 5) firstTimestamp
 6) (integer) 1583531611346
 7) lastTimestamp
 8) (integer) 1583531630791
 9) retentionTime
10) (integer) 31536000000
11) chunkCount
12) (integer) 1
13) maxSamplesPerChunk
14) (integer) 256
15) labels
16) 1) 1) "siteid"
       2) "5e46fab2c26e1f2e4b066cd8"
    2) 1) "event"
       2) "device-tick-a"
17) sourceKey
18) (nil)
19) rules
20) (empty list or set)


Any ideas on how to fix this?


Thanks

 



Ariel Shtul

Mar 8, 2020, 4:43:02 AM
to RedisTimeSeries Discussion Forum
Hello Manish,

When you use '*', RedisTimeSeries takes the timestamp from the current system clock.

RedisTimeSeries doesn't allow multiple values to be written at the same timestamp.

Therefore I assume that, the way you run your system, more than one call arrives within the same millisecond, which results in the correct and expected error "Timestamp cannot be older than the latest timestamp".
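For illustration, the collision can be reproduced outside of Lua with a plain client loop (a rough sketch assuming redis-py; the key name is made up):

import redis

r = redis.Redis()
r.execute_command("TS.CREATE", "repro:ts")

# A tight loop of '*' adds will sooner or later issue two adds within the
# same server millisecond, and the second one is rejected with the error above.
for _ in range(10000):
    r.execute_command("TS.ADD", "repro:ts", "*", 10)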

Since you are running both on one machine, you could take the system time on the client side in whole seconds, multiply it by 1000, keep a counter of samples within the current second, and add the counter to that base. This leaves room for up to 1000 values per second. Depending on your use case, you can decode the timestamp back into the real time when you read.
 
import time

last_ts = 0
counter = 0

cur_ts = int(time.time())        # current time in whole seconds
if last_ts == cur_ts:
    counter += 1                 # another sample within the same second
else:
    last_ts = cur_ts
    counter = 0

# seconds * 1000 leaves room for up to 1000 unique timestamps per second
ts.add("series", last_ts * 1000 + counter, value)

Regards,

Ariel

Manish Sharan

Mar 9, 2020, 11:00:39 AM
to RedisTimeSeries Discussion Forum
Hi Ariel
Thank you for your response. This was a test program I was writing to learn about the time series module. Your suggestion worked.

But I see an issue with the time series module overall.

Suppose I am using this module to store events from client browsers as follows:
1. A client browser sends an event to my application.
2. The application does a TS.ADD to add it to the event time series.
3. There are n instances of the application for load balancing.


It is possible for each instance of the application to receive an event at the same time, say 1583765086996.

Now, if each instance of the application tries to do a TS.ADD with 1583765086996, only one app server would succeed and the rest would fail. This is why I thought it made sense to pass "*" as the timestamp value.

But it seems that TS.ADD with '*' fails if multiple app instances call TS.ADD on the same time series within one millisecond. Redis typically has no issue handling multiple calls within the same millisecond. (I love Redis for how fast it handles write operations.)

So could I ask that the module resolve the timestamp conflict, when passed "*", by automatically incrementing the timestamp by 1 ms? In my use case I am aggregating results over seconds, so a deviation of a few milliseconds does not matter (it might matter for scientific calculations and robot traders). Perhaps we could pass "**" instead of "*" to let the module know that it can automatically increment the system timestamp value to resolve conflicts.






Ariel Shtul

Mar 9, 2020, 1:40:45 PM
to RedisTimeSeries Discussion Forum
Hi Manish,

We have been debating for a long time whether to allow multiple values at a single timestamp, or whether to allow insertion in the past. At the moment we have decided not to implement either.

I do like your idea that "**" could mean MAX(systemTime, lastTimestamp + 1), though with your use case you may end up with timestamps that are completely out of sync with the original event times.

You describe needing several clients to split the load, but if you make sure the same time series is always processed by the same client, you could keep the solution we discussed before.
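For illustration only, a per-client emulation of that "**" idea might look like this (a sketch; 'ts', 'value' and the series name are placeholders, and it only works if a single client owns the series):

import time

last_ts = 0  # last timestamp this client wrote to the series

def next_timestamp():
    global last_ts
    now_ms = int(time.time() * 1000)
    # never go backwards: bump by 1 ms when the clock collides with the last write
    last_ts = max(now_ms, last_ts + 1)
    return last_ts

ts.add("series", next_timestamp(), value)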

Cheers

Arturo Mtz. Lavin

Mar 9, 2020, 1:55:15 PM
to RedisTimeSeries Discussion Forum
I think allowing multiple values at the same timestamp could have negative side effects. What would happen to the GET command if the latest point is a repeated timestamp? Also, adding this could affect speed and add extra complexity.

The same goes for insertion in the past: how would the double-delta compression handle it? If this does get implemented, I think having it as a configuration option would be best, so as not to affect performance for the rest of the users who don't need those features.

Regards.

Danni Moiseyev

Mar 9, 2020, 6:52:08 PM
to Arturo Mtz. Lavin, RedisTimeSeries Discussion Forum
Manish,

You might want to try TS.INCRBY, which is a better fit for your needs since you are counting events from multiple sources and want to aggregate them. This command will give you a more suitable solution.
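A minimal sketch of what that could look like (assuming redis-py; the key name is made up). An increment that lands on the latest sample's timestamp updates that sample in place rather than failing, so the "older than the latest timestamp" error does not come up:

import redis

r = redis.Redis()

# Every application instance increments the same counter series;
# with no explicit timestamp the server clock is used.
r.execute_command("TS.INCRBY", "site:ticks:total", 1)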

Danni.
