RateLimiter: possibility of sharing rate limits between multiple instances


Martin Schayna

Sep 19, 2012, 6:08:50 AM
to guava-...@googlegroups.com
Hi,

I have put together my own pretty simple rate limiting for my API; it shares the rate limit between multiple API server instances behind a load balancer. Counters are shared on a Memcached server, where key names are composed of two parts: a token and a time interval. The token identifies the user across all API server instances, and the time interval is the id of a one-minute time window. Each server instance caches its own counters (via Guava's Cache) and synchronizes them with Memcached several times a minute.
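
Roughly, the scheme looks like this (a simplified sketch rather than my actual code; the Memcached client is hidden behind a made-up SharedCounters interface, and all names are illustrative):

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

/** Simplified sketch of the per-token, per-minute counter scheme described above. */
public class SharedWindowCounter {

  /** Placeholder for the Memcached client; only an atomic increment is needed. */
  public interface SharedCounters {
    long addAndGet(String key, long delta); // returns the new shared total
  }

  private final SharedCounters memcached;
  private final long limitPerMinute;

  // Increments not yet pushed to Memcached, keyed by "token:minuteWindow".
  private final LoadingCache<String, AtomicLong> pendingIncrements =
      CacheBuilder.newBuilder()
          .expireAfterWrite(2, TimeUnit.MINUTES)
          .build(new CacheLoader<String, AtomicLong>() {
            @Override public AtomicLong load(String key) {
              return new AtomicLong();
            }
          });

  // Last shared totals seen for each key, refreshed on every sync.
  private final Map<String, Long> lastKnownTotals = new ConcurrentHashMap<String, Long>();

  public SharedWindowCounter(SharedCounters memcached, long limitPerMinute) {
    this.memcached = memcached;
    this.limitPerMinute = limitPerMinute;
  }

  // Key = user token + id of the current one-minute window.
  private static String key(String token) {
    return token + ":" + System.currentTimeMillis() / TimeUnit.MINUTES.toMillis(1);
  }

  /** Counts one request and answers from the totals seen during the last sync. */
  public boolean tryAcquire(String token) {
    String key = key(token);
    pendingIncrements.getUnchecked(key).incrementAndGet();
    Long total = lastKnownTotals.get(key);
    return total == null || total < limitPerMinute;
  }

  /** Run several times a minute: push local deltas and read back the shared totals. */
  public void syncWithMemcached() {
    for (Map.Entry<String, AtomicLong> e : pendingIncrements.asMap().entrySet()) {
      long delta = e.getValue().getAndSet(0);
      lastKnownTotals.put(e.getKey(), memcached.addAndGet(e.getKey(), delta));
    }
  }
}

The local cache absorbs most of the increments, so Memcached only sees a few updates per minute per token; the trade-off is that the shared total each instance checks against is always slightly stale.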

I like the idea behind Guava's new RateLimiter, especially the way it estimates the time of the next acquire.

But I don't see any way to share rate limits across two or more RateLimiter instances. Am I right?

Thanks,
Martin Schayna

Dimitris Andreou

Sep 20, 2012, 2:20:59 PM
to Martin Schayna, guava-...@googlegroups.com
Hi,

Sorry, I can't quite tell what you're asking exactly; can you clarify with a concrete example? What kind of behavior do you want to achieve?

Thanks,
Dimitris

Martin Schayna

Sep 23, 2012, 7:57:32 AM
to guava-...@googlegroups.com, Martin Schayna
Hi,

I have a REST server (based on the Jersey implementation, but that doesn't matter) and I want to limit users' request traffic. The server runs in the AWS cloud behind an ELB load balancer, so there are multiple server instances and the load balancer routes traffic from users across them. Because API clients usually don't use the ELB "sticky session" feature, it is quite common for subsequent requests from a client to be processed on several different API server instances, and each server instance will then have its own RateLimiter instance. Limiting could therefore be inaccurate (more "benevolent").
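
For illustration (the rate and names here are made up): if each instance independently creates its own limiter like this, a single user can be granted roughly N times the intended rate across N instances:

import com.google.common.util.concurrent.RateLimiter;

public class PerInstanceLimiting {
  // Each server instance creates its own limiter at the full per-user rate,
  // so with N instances behind the ELB a single user can be granted
  // up to roughly N times the intended rate in total.
  private final RateLimiter perUserLimiter = RateLimiter.create(100.0); // 100 permits/s, illustrative

  void handleRequest() {
    perUserLimiter.acquire(); // throttles only this instance's share of the traffic
    // ... process the request ...
  }
}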

It's clear to me that to handle this, the RateLimiters would have to "cooperate" across server instances. I'm not sure, but it looks like this scenario is a little bit "out of scope"... please forget my issue and keep RateLimiter simple as it is :-)

Keep up the good work, guys. 

M.

Dimitris Andreou

Sep 24, 2012, 12:26:36 AM
to Martin Schayna, guava-...@googlegroups.com
Yes, distributed throttling is a very complicated and open-ended domain, beyond what Guava strives to provide. We were very conscious about avoiding any complexity beyond a low-level throttling primitive :)

Thanks,
Dimitris

Gregory Kick

Sep 25, 2012, 1:16:45 PM
to Dimitris Andreou, Martin Schayna, guava-discuss
It sounds like https://wiki.corp.google.com/twiki/bin/view/Main/GdhQuotaServer might be a better option for you.
Greg Kick
Java Core Libraries Team

Gregory Kick

Sep 25, 2012, 1:21:14 PM
to Dimitris Andreou, Martin Schayna, guava-discuss
Oops.  Got my Google and non-Google lists confused.  Obviously that link won't be of much help to anyone here.

Dimitris Andreou

Apr 1, 2013, 7:10:12 PM
to Kevin Su, guava-discuss, Martin Schayna
This could very well be a can of worms. :-/
Not just because supporting serialization can be a pain. 

RateLimiter assumes a fixed, monotonic source of time readings: for example, "-read() + read()" (the difference between a later reading and an earlier one) can never be negative. If you serialize the internal state, that assumption breaks -- a deserialized RateLimiter can appear to go back in time. One would have to read the code carefully to see what kind of problems that might surface.
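
Just to illustrate the monotonicity point (this is only an analogy, not RateLimiter's actual internals): elapsed-time readings are only meaningful within the JVM that produced them.

public class MonotonicTimeDemo {
  public static void main(String[] args) {
    // Within a single JVM, elapsed readings never go backwards...
    long t1 = System.nanoTime();
    long t2 = System.nanoTime();
    System.out.println("elapsed >= 0: " + (t2 - t1 >= 0)); // always true here

    // ...but nanoTime is measured from an arbitrary, per-JVM origin, so a reading
    // serialized from another JVM (or an earlier run) is not comparable to t1/t2,
    // and "elapsed" time computed against it can come out negative.
  }
}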

As for ideas, couldn't you put a "qps allocation" variable in memcache and design it as a semaphore, where you grab permits (qps) that you are supposed to release back after some interval (to avoid starving other clients, and to be able to build in some fairness)? RateLimiter would just help you throttle against that temporary rate allocation, but I don't see how it could do much more than that.
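
Very roughly, something like the sketch below, where QpsLeaseStore is just a placeholder for the memcache-backed allocation (it's not anything Guava provides):

import com.google.common.util.concurrent.RateLimiter;

/** Illustrative sketch: lease a slice of a shared QPS budget, then throttle locally. */
public class LeasedRateLimiter {

  /** Hypothetical memcache-backed "semaphore" over a global QPS budget. */
  public interface QpsLeaseStore {
    /** Tries to reserve up to requestedQps from the shared budget; returns the amount granted. */
    double acquireQps(double requestedQps);
    /** Returns previously leased QPS to the shared budget. */
    void releaseQps(double leasedQps);
  }

  private final QpsLeaseStore store;
  private final RateLimiter limiter;
  private double leasedQps;

  public LeasedRateLimiter(QpsLeaseStore store, double requestedQps) {
    this.store = store;
    this.leasedQps = store.acquireQps(requestedQps);
    // Throttle locally at whatever rate this instance was granted
    // (RateLimiter requires a positive rate, hence the floor).
    this.limiter = RateLimiter.create(Math.max(leasedQps, 0.1));
  }

  /** Blocks until a permit is available under the currently leased rate. */
  public void acquire() {
    limiter.acquire();
  }

  /** Called periodically: hand the lease back and re-acquire, so other
   *  instances get a chance at the budget (the fairness point above). */
  public void renew(double requestedQps) {
    store.releaseQps(leasedQps);
    leasedQps = store.acquireQps(requestedQps);
    limiter.setRate(Math.max(leasedQps, 0.1));
  }
}

The periodic renew() is where the fairness would come from: leases get handed back, and other instances can then claim a larger share of the budget.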


On Mon, Apr 1, 2013 at 3:44 PM, Kevin Su <ksu...@gmail.com> wrote:
Hi,

Sorry to resurrect an old thread, but I was wondering whether it makes sense to make the RateLimiter class serializable?  The reason I ask is that, like the OP, I am trying to throw together a quick distributed rate limiting solution, but I want to include the logic that the RateLimiter class provides, like estimating the next acquire.  I was exploring the possibility of storing RateLimiter objects in a distributed memcache (hence the need for serialization).  Any advice on whether this is a reasonable solution?

Thanks for the help!

-Kevin


vik2...@gmail.com

Dec 27, 2017, 3:31:44 PM
to guava-discuss
Hey Martin,

Can you please share your Java implementation that uses AWS Memcached for rate limiting your API?