throttling proxy implementation


Sam Lee

Feb 5, 2013, 9:48:31 AM
to openre...@googlegroups.com
Hey,

I need to set up a proxy where only one POST request is sent to the backend at a time. Other requests should be queued up for as long as resources (memory) permit.

Something like this would be good:
http://wiki.nginx.org/HttpLimitReqModule
except that it returns 503 right away when the limit is reached.
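
For example, with a plain limit_req setup like this sketch (the upstream name is made up), anything over the configured rate is rejected immediately instead of being queued:

limit_req_zone $server_name zone=backend:1m rate=1r/s;

server {
    location / {
        # no "burst" parameter, so every request over 1r/s gets
        # a 503 right away instead of waiting in a queue
        limit_req zone=backend;
        proxy_pass http://slowapp;
    }
}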

So, I thought I could write something using the ngx_lua (Lua nginx) module.

Has anyone done something like this?
Where do I start?
Are there other load balancers or proxy servers that can queue up incoming requests and forward them to the backend at a certain rate?


I was thinking about using ngx.shared.DICT to store incoming requests and having another ngx.thread that pops one request at a time and calls ngx.location.capture(backend_uri).
But since I am targeting POST requests, I am not sure it is feasible to read each request body and store them all in ngx.shared.DICT.

I feel like I'm using the wrong tool or solution for the problem, or did not define the problem properly.
Basically, my hypothesis is that the backend (which I don't have control over) has issues with concurrent writes (handling concurrent POST requests), and I wanted to validate that hypothesis by sending concurrent POSTs to a proxy that forwards them to the backend at a much slower rate.

What would you do if you have a slow backend that can only accept one request at a time, and clients are dumb enough that they don't retry on 503?

agentzh

Feb 5, 2013, 2:14:16 PM
to openre...@googlegroups.com
Hello!

On Tue, Feb 5, 2013 at 6:48 AM, Sam Lee wrote:
>
> What would you do if you have a slow backend that can only accept one
> request at a time, and clients are dumb enough that they don't retry on 503?
>

I don't think you need to build a separate queueing proxy just for this.

One approach is to emulate a global lock in a shdict: only the
request holding the lock can proceed, and the other requests have to
wait on the lock. The access_by_lua directive is your friend here;
you can put the lock-fetching and lock-waiting logic there. The logic
for releasing the lock should go into log_by_lua.
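
A rough, untested sketch of the idea (it assumes a "lua_shared_dict
locks 1m;" declared in the http block and an upstream named backend;
adjust the names and numbers to taste):

http {
    lua_shared_dict locks 1m;

    server {
        location / {
            access_by_lua '
                local locks = ngx.shared.locks
                -- add() is atomic across all nginx workers and only
                -- succeeds when the key is absent, so it acts as a
                -- try-lock; the 60s expiry guards against leaked locks
                while not locks:add("backend_lock", true, 60) do
                    ngx.sleep(0.001)  -- held elsewhere: wait and retry
                end
                ngx.ctx.holds_lock = true
            ';

            log_by_lua '
                -- runs after the response has been sent: release the
                -- lock, but only if this request actually acquired it
                if ngx.ctx.holds_lock then
                    ngx.shared.locks:delete("backend_lock")
                end
            ';

            proxy_pass http://backend;
        }
    }
}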

See also this thread:
https://groups.google.com/group/openresty-en/browse_thread/thread/4c91de9fc25dd2d7/6fdf04d24f12443f

Best regards,
-agentzh

Sam Lee

Feb 5, 2013, 2:25:37 PM
to openre...@googlegroups.com
Ah, thanks.
For now, I ended up using this: https://github.com/heyman/throttled-http-proxy/



Sam Lee

Feb 8, 2013, 12:26:42 AM
to openre...@googlegroups.com
> One approach is to emulate a global lock in a shdict: only the
> request holding the lock can proceed, and the other requests have to
> wait on the lock. The access_by_lua directive is your friend here;
> you can put the lock-fetching and lock-waiting logic there. The logic
> for releasing the lock should go into log_by_lua.



How would you implement lock waiting?
Infinite loop polling for lock? 

Or am I better off using Redis?

access_by_lua '
    local Redis = require("resty.redis")
    local redis = Redis:new()
    local ok, err, res
    ok, err = redis:connect("127.0.0.1", 6379)
    res, err = redis:subscribe("Chan")
    res, err = redis:read_reply()  -- waits indefinitely for a message
';


And should Redis:new() and redis:connect() go into init_by_lua?

agentzh

Feb 8, 2013, 2:29:43 PM
to openre...@googlegroups.com
Hello!

On Thu, Feb 7, 2013 at 9:26 PM, Sam Lee wrote:
>
> How would you implement lock waiting?
> Infinite loop polling for lock?
>

You can use a finite loop with ngx.sleep(0.001).
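
Something along these lines (the iteration bound and the sleep
interval here are arbitrary):

-- in access_by_lua: bounded polling instead of an infinite loop
local locks = ngx.shared.locks
for i = 1, 5000 do  -- give up after roughly 5 seconds
    if locks:add("backend_lock", true, 60) then
        ngx.ctx.holds_lock = true  -- so log_by_lua knows to release it
        return  -- lock acquired; let the request proceed
    end
    ngx.sleep(0.001)  -- yields this request without blocking the worker
end
ngx.exit(503)  -- still locked after the deadline: give up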

> Or am I better off using Redis?
>

It's up to you :)

> access_by_lua '
>     local Redis = require("resty.redis")
>     local redis = Redis:new()
>     local ok, err, res
>     ok, err = redis:connect("127.0.0.1", 6379)
>     res, err = redis:subscribe("Chan")
>     res, err = redis:read_reply()  -- waits indefinitely for a message
> ';
>
>
> And should Redis:new() and redis:connect() go into init_by_lua?
>

No, never. See https://github.com/agentzh/lua-resty-redis#limitations
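
The usual pattern is to create the Redis object in the request
handler itself and rely on the built-in connection pool, e.g. (a
sketch; the timeout and pool numbers are just examples):

access_by_lua '
    local redis = require "resty.redis"
    local red = redis:new()
    red:set_timeout(1000)  -- 1 second

    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect to redis: ", err)
        return ngx.exit(500)
    end

    -- ... talk to redis here ...

    -- put the connection back into the per-worker pool instead of
    -- closing it: at most 100 idle connections, 10 s max idle time
    local ok, err = red:set_keepalive(10000, 100)
';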

Best regards,
-agentzh

Sam Lee

Feb 13, 2013, 11:51:51 PM
to openre...@googlegroups.com
Ouch. Since lua-resty-redis cannot be used in log_by_lua (the cosocket API is disabled in that phase), I can't really use Redis for this.

Is there another way?
Here is my nginx.conf:

I am simulating a slow server at /x, and / is a proxy of /x.
access_by_lua_file on / sets up and blocks on a Redis list; log_by_lua_file is supposed to unblock the requests that are waiting.

Do I have to implement proxy_pass myself in Lua, and then use that Lua proxy implementation (which would send messages to Redis upon completion) in content_by_lua?

Is there an existing proxy implementation?





Sam Lee

Feb 14, 2013, 8:12:06 AM
to openre...@googlegroups.com
Actually, I ended up using ngx.location.capture('/proxyslowapp')

and:
location /proxyslowapp {
    internal;  # only reachable via subrequests such as ngx.location.capture
    proxy_pass http://slowapp;
}

One thing is that I am copying all headers:

local path = '/proxyslowapp' .. ngx.var.request_uri
local response = ngx.location.capture(path, {
    -- capture defaults to GET subrequests; forward the original
    -- method and request body since we are proxying POSTs
    method = ngx["HTTP_" .. ngx.req.get_method()],
    always_forward_body = true,
})
ngx.status = response.status
for k, v in pairs(response.header) do
    ngx.header[k] = v
end
ngx.say(response.body)  -- (ngx.print would avoid the trailing newline)


ngx.header.content_type after the above for loop is indeed 'text/html'.
But ngx.say() actually puts Content-Type: application/octet-stream on the wire, unless the request URI ends with .html.

Is it because I am including conf/mime.types?

agentzh

Feb 14, 2013, 2:26:45 PM
to openre...@googlegroups.com
Hello!

On Thu, Feb 14, 2013 at 5:12 AM, Sam Lee wrote:
> Actually, I ended up using ngx.location.capture('/proxyslowapp')
>

Yes, this should work, though you pay the price of buffering all of
the response data in memory :)

> ngx.header.content_type after the above for loop is indeed 'text/html'.
> But ngx.say() actually puts Content-Type: application/octet-stream on
> the wire, unless the request URI ends with .html.
>
> Is it because I am including conf/mime.types?
>

If the original response carries a Content-Type header, then that
header will be applied here, because you forward all of the response
headers explicitly by setting ngx.header.HEADER.

Otherwise, it is controlled by the "default_type" config directive
setting in your nginx.conf. See

http://wiki.nginx.org/HttpCoreModule#default_type
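
For example, to make the fallback text/html rather than application/octet-stream:

# in the http, server, or location block of nginx.conf
default_type text/html;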

Best regards,
-agentzh