Is there something like a "back_off" for ndb transactions?


Kaan Soral

Oct 28, 2016, 11:10:34 PM10/28/16
to Google App Engine
Often, when 3-4 transactions hit my app at the same time, even with retries=6, they cause a contention exception.

My custom solution to the issue would be to use retries=0 and implement the retry logic manually: sleep a random amount of time, then retry.

So basically, when 3-4 transactions occur consecutively, this solution would untangle them within 60 seconds (an assumption).

I'm just wondering whether this is a good solution and whether there is an existing solution I'm missing. Basically, I want to be able to handle 5 consecutive transactions.
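
Roughly what I have in mind, as a sketch (the exception I catch here, datastore_errors.TransactionFailedError, and the exact delays are assumptions):

import random
import time

from google.appengine.api import datastore_errors
from google.appengine.ext import ndb


def transaction_with_backoff(callback, attempts=6, base_delay=0.5, max_delay=10.0):
    # Run callback in an ndb transaction; back off with jitter between attempts.
    for attempt in range(attempts):
        try:
            return ndb.transaction(callback, retries=0)
        except datastore_errors.TransactionFailedError:
            if attempt == attempts - 1:
                raise
            # Sleep a random amount that grows with each attempt, so colliding
            # requests spread out instead of retrying in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))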

timh

Oct 29, 2016, 12:01:18 AM10/29/16
to Google App Engine
The retrying library (https://pypi.python.org/pypi/retrying/1.3.3) provides decorators for different types of backoff/retry scenarios.
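
For example, something along these lines (just a sketch; update_counter and the exception check are placeholders for whatever your transaction and contention error actually are):

from retrying import retry

from google.appengine.api import datastore_errors
from google.appengine.ext import ndb


def _is_contention(exc):
    # Only retry on datastore contention, not on arbitrary errors.
    return isinstance(exc, datastore_errors.TransactionFailedError)


@retry(retry_on_exception=_is_contention,
       wait_random_min=500, wait_random_max=3000,  # jittered wait, in milliseconds
       stop_max_attempt_number=6)
def update_counter(key):
    def txn():
        ent = key.get()
        ent.count += 1
        ent.put()
    ndb.transaction(txn, retries=0)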

However, I would look at your data model to see why you have such contention and what changes you could make to minimise it.

If you can't, you may also want to consider using a pull queue and using tasks to process the transactions, as sketched below.
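
Roughly, that could look like this (a sketch; the queue name, payload format and the assumption that all queued updates hit the same entity are mine):

import json

from google.appengine.api import taskqueue
from google.appengine.ext import ndb

# Must be declared as a pull queue in queue.yaml.
QUEUE = taskqueue.Queue('contended-updates')


def enqueue_update(entity_key, delta):
    # Request handlers enqueue the work instead of running the transaction directly.
    QUEUE.add(taskqueue.Task(
        payload=json.dumps({'key': entity_key.urlsafe(), 'delta': delta}),
        method='PULL'))


def process_batch():
    # Run periodically (e.g. from cron or a push task). A single worker leases
    # a batch and applies it in one transaction, so requests never contend.
    tasks = QUEUE.lease_tasks(lease_seconds=60, max_tasks=100)
    if not tasks:
        return
    updates = [json.loads(t.payload) for t in tasks]

    def txn():
        ent = ndb.Key(urlsafe=updates[0]['key']).get()
        ent.count += sum(u['delta'] for u in updates)
        ent.put()

    ndb.transaction(txn)
    QUEUE.delete_tasks(tasks)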

T

Kaan Soral

Oct 29, 2016, 10:29:19 AM10/29/16
to Google App Engine
Thanks for sharing retrying, it's indeed pretty interesting.

My situation is pretty unique: there are usually 3 consecutive requests in the worst case and at most 5-6, which is why an architecture change would be overkill.

I have had a lot of experience with both push-queue and pull-queue based solutions in the past, and I can safely say: "better let them contend".

In my opinion, both pull queues and push queues are extremely unreliable, so on this project I made it so that occasional contention is safe to occur.

However, I had hoped that with retries=6 the transactions would untangle on their own, but they don't.

PK

Oct 29, 2016, 11:19:34 AM10/29/16
to google-a...@googlegroups.com
I have found push task queues very reliable and use them all the time.

Kaan Soral

Oct 29, 2016, 11:43:46 AM10/29/16
to Google App Engine
My initial problem with push queues was that they handled their retries completely internally; only once you get them to do "infinite retries" do the losses become 0%.

The problem with pull queues was throughput; sadly, as pull queues are much better suited to battling contention, especially when used together with

PS: I assume that, for my original problem, manual retrying is the simplest solution.

What kind of method with push queues do you use to solve the contention issue, though?
I forget my original method that used only push queues. My current method uses pull queues for tasks and push queues for processing; there are delays, there are losses (it's always an ongoing battle). That's why, with my new project, I kept it strictly to simple transactions.

timh

Oct 30, 2016, 3:56:04 AM10/30/16
to Google App Engine
Consider using a roll-forward model, with tasks ensuring the transactions complete.

There is an old article by Nick Johnson covering the approach, and I have used it to ensure reliable completion of a set of transactions (and that was back in the days of the 1-minute task deadline).
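
The shape of it, as a rough sketch (the handler URL and entity fields are made up for illustration): the first transaction records the intent and transactionally enqueues a task, and the task handler does the follow-up work; App Engine keeps retrying the task until it succeeds, so the operation always rolls forward.

from google.appengine.api import taskqueue
from google.appengine.ext import ndb


@ndb.transactional
def start_operation(order_key):
    order = order_key.get()
    order.status = 'pending'
    order.put()
    # Enqueued only if this transaction commits.
    taskqueue.add(url='/tasks/complete_order',
                  params={'order': order_key.urlsafe()},
                  transactional=True)


# Body of the /tasks/complete_order handler; idempotent, so retries are harmless.
def complete_order(order_urlsafe):
    @ndb.transactional
    def txn():
        order = ndb.Key(urlsafe=order_urlsafe).get()
        if order.status != 'done':
            order.status = 'done'
            order.put()
    txn()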


T