datastore sharding counters

132 views

nfi...@junctionbox.ca

Nov 29, 2017, 4:51 PM
To: google-appengine-go
Hi All,

Below is an article that I understand to be the canonical example for sharding counters with Google datastore.


I have 2 questions:
  1. Can you programmatically track Entity write contention? I would like to dynamically increase shard sizes as required to minimise the number of contended writes (e.g. threshold that can be increased based on median, 95PCTL, etc).
  2. Is it a bad idea to eagerly "preallocate" counters in the IncreaseShards function so that GetMulti can be used?
For pt 2 I would see IncreaseShards do the following in a transaction:
  1. Read current config.
  2. PutMulti of missing counters (num new - num current).
  3. Put new shard config.
And Count could replace the Query/Run with GetMulti, which I assume is more efficient/faster.

Since eager loading is not presented as an option, is it fair to assume it is less robust for some reason — locking, latency, consistency/visibility, or something else?

Kind regards,
Nathan

Alexander Trakhimenok

Dec 1, 2017, 8:25 AM
To: google-appengine-go
The canonical Python code for the article does not use a query and does use get_multi(). It looks like the Java and Go code in the article is not optimal.

Alex

Barry Hunter

Dec 1, 2017, 11:05 AM
To: nfi...@junctionbox.ca, google-appengine-go

  1. Can you programmatically track Entity write contention?
Not sure of the specifics, but you could track how many times the transaction is retried.


 If the commit fails due to a conflicting transaction, RunInTransaction retries f

So you could track retries, and if there are "many", increase the number of shards.

... but it seems it will add a lot of overhead to the process (i.e. you will need somewhere to track the number of retries — e.g. another sharded counter! It might be reasonable to use memcache for this, since it doesn't matter if the data is lost).


One thing to be worried about: if there is a general failure (i.e. failing for reasons other than write contention), you might get exponential growth in the number of shards. You would need some sort of throttling to prevent lots of increments in quick succession (maybe use a task queue to dedup the IncreaseShards calls).



Nathan Fisher

Dec 1, 2017, 12:55 PM
To: Barry Hunter, google-appengine-go
Alex, thanks for the tip! Takeaway: always reference the Python code first? :)

Barry, thanks! I don't want to over-provision the counters. I'm thinking of something like a rolling window with regression: anything significantly outside the predicted value would be treated as a spike, while anything within a given range would result in an increase by some amount. I'll think about it some more.

Cheers,
Nathan
--
- sent from my mobile

Alexander Trakhimenok

Dec 8, 2017, 12:40 PM
To: google-appengine-go
Actually, tracking transaction retries is pretty simple and not a big overhead. You can track it at the instance level, where it is very cheap (no memcache needed).
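Tracked at the instance level, this could be as small as a package-level atomic counter shared by all requests the instance serves (a sketch; the function names are made up):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// txRetries is shared across all requests served by this instance;
// reading or bumping it needs no memcache round-trip.
var txRetries int64

// noteRetry records one transaction retry.
func noteRetry() { atomic.AddInt64(&txRetries, 1) }

// retriesSoFar reports the running total for this instance.
func retriesSoFar() int64 { return atomic.LoadInt64(&txRetries) }

func main() {
	noteRetry()
	noteRetry()
	fmt.Println(retriesSoFar()) // 2
}
```

A background check (or each Nth request) could compare this total against a threshold and trigger IncreaseShards when it is exceeded.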

But it looks a bit like over-engineering.
