Lua upstream server IP and port

luis....@interactive3g.com

Jan 22, 2013, 11:12:20 AM
to openre...@googlegroups.com
Hi,

I am using lua-resty-memcached to push some data to a Kestrel server. This is working just fine.

Now I want to add a second Kestrel server and load-balance requests between them. To do this, I am using standard nginx configuration:

upstream DSR1 {
    ...
}

upstream DSR2 {
    ...
}

upstream_list DSR_CLUSTER DSR1 DSR2;

Then I have,

server {
    ...
    set_hashed_upstream $kestrel DSR_CLUSTER "12";  # "12" is just for testing
    content_by_lua '
        ngx.log(ngx.CRIT, ngx.var["backend"])
    ';
}
 

In the nginx log file I am getting "DSR2" as a string. However, I want to access both the IP and port of the DSR2 server so I can use them in the memc:connect() call.

Any hint?
Thanks,
Luis

luis....@interactive3g.com

Jan 22, 2013, 12:13:28 PM
to openre...@googlegroups.com
(typo in original post)

 ngx.log(ngx.CRIT,ngx.var["backend"])
should read
 ngx.log(ngx.CRIT,ngx.var["kestrel"])

agentzh

Jan 22, 2013, 2:36:14 PM
to openre...@googlegroups.com
Hello!

On Tue, Jan 22, 2013 at 8:12 AM, luis.gasca wrote:
> set_hashed_upstream $kestrel DSR_CLUSTER "12" -- 12 is just for testing
>
> content_by_lua '
>
> ngx.log(ngx.CRIT,ngx.var["backend"])
>
[...]
> In the nginx log file I am getting "DSR2" as a string. However, I want to
> access both IP and Port of the server DSR2 to use it in memc:connect() call
>

No, there's no Lua API to access the internal details of a specific
upstream. You're recommended to do the upstream hashing (or sharding
in general) completely in Lua. For example,

local servers = {
    {"192.168.1.226", 22133},
    {"192.168.1.227", 22133},
}
local server = hash_my_servers(servers, my_key)
local ok, err = memc:connect(server[1], server[2])

where you just need to write a simple hash_my_servers Lua function.
You can use modulo or consistent hashing or any other sharding
algorithm that you like ;)
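
For illustration, a minimal modulo-based version of such a function
could look like this (just a sketch; using ngx.crc32_short as the hash
is only my assumption here, any hash function will do):

local function hash_my_servers(servers, key)
    -- hash the key and map it onto a 1-based server slot
    local h = ngx.crc32_short(tostring(key))
    return servers[(h % #servers) + 1]
end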

Best regards,
-agentzh

luis....@interactive3g.com

Jan 22, 2013, 2:44:03 PM
to openre...@googlegroups.com
Hi,

That's what I was thinking, but I am completely new to both nginx and Lua and my head was spinning.

btw, thanks for such a great nginx bundle. Our core software (an SMS gateway) is going to be a lot simpler with OpenResty in front of it. We are updating it to handle all the incoming calls from SMS aggregators and publish a unified JSON model into our core system.

Best regards,
Luis

agentzh

Jan 22, 2013, 2:50:17 PM
to openre...@googlegroups.com
Hello!

On Tue, Jan 22, 2013 at 11:44 AM, wrote:
> That's what I was thinking, but I am completely new to both nginx and lua
> and my head was spinning.
>

I know, I know :) The following slides for my nginx/lua talks can be helpful:

http://openresty.org/#Presentations

and these articles and ebooks too:

http://openresty.org/#Resources
http://openresty.org/#eBooks

When in doubt, please don't hesitate to ask here :)

> btw, thanks for such a great nginx bundle. Our core software (SMS gateway)
> is going to be a lot simpler with openresty in front of it. We are updating
> it to handle all the incoming calls from sms aggregators and publishing an
> unified json model into our core system.
>

Great! I'm feeling honoured :)

Thanks!
-agentzh

luis....@interactive3g.com

Jan 23, 2013, 11:04:40 AM
to openre...@googlegroups.com
Thanks for your pointers. Much clearer now.

I have a first working version. There are still some rough edges (error checking, etc.), but I would like your opinion on the general structure.

Requirements are simple:

1. Get a request from an external provider
2. Extract some parameters and map them to some values
3. Encode a JSON document
4. Push it to our Kestrel cluster (memcached protocol)
5. Return the string "OK" (text/plain) to the provider.

worker_processes  4;
error_log logs/error.log;
events {
    worker_connections 1024;
}

http {
        map $nativeStatus $TR18mappedStatus {
                default NACK_FATAL;
                # more values here ...
        }
        map $nativeError $TR18mappedError {
                default 'Internal non mapped error';
                # more values here ...
        }
        upstream dsrcluster {
                server 192.168.1.226:22133;
                server 192.168.1.227:22133;
                keepalive 10;
        }
        server {
                listen 8080;
                location /tr18 {
                        default_type text/plain;
                        set $nativeStatus $arg_smsid;
                        set $nativeError $arg_smsid;
                        set $memc_value '{ "tr" : 18, "id" : "${arg_smsid}", "state": "${TR18mappedStatus}", "nativeStatus": "${nativeStatus}", "nativeErrorMessage" : "${nativeError}" }';
                        content_by_lua '
                                local res = ngx.location.capture("/kestrel", {args = { val = ngx.var.memc_value }})
                                ngx.say("OK")
                        ';
                }

                location = /kestrel {
                        set $memc_key 'sirocco_dsr';
                        set $memc_cmd 'set';
                        set_unescape_uri $memc_value $arg_val;
                        memc_pass dsrcluster;
                }

        }
}

I have some doubts about using memc inside a Lua ngx.location.capture() call, but I felt it was cleaner to select the backend the nginx way (btw, I do not need any consistent backend selection; round robin works fine in my case).

Thanks again.
Luis

agentzh

Jan 23, 2013, 2:15:51 PM
to openre...@googlegroups.com
Hello!

On Wed, Jan 23, 2013 at 8:04 AM, luis.gasca wrote:
> I have some doubts about using memc inside a lua location.capture() call,
> but I felt it was cleaner to select the back-end nginx way (btw, do not need
> any consistent backend selection; round robin works well in my case)
>

Well, you're recommended to use lua-resty-memcached instead of
ngx.location.capture + ngx_memc because

1. lua-resty-memcached should usually be more efficient, both
memory-wise and CPU-wise. (You're encouraged to compare it yourself.)

2. You can have better error handling with lua-resty-memcached because
you get detailed error messages right in Lua land.

3. You have more freedom with backend node management in
lua-resty-memcached, like adding or removing memcached servers on the
fly (with ngx_memc, you have to configure a fixed set of memcached
backend servers in nginx.conf).
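
For instance, a rough sketch of what the /tr18 handler could look like
with lua-resty-memcached (this reuses the server addresses and the
sirocco_dsr key from your config and assumes you keep the existing
"set $memc_value ..." line in /tr18; the random backend pick is just a
stand-in for whatever selection scheme you prefer, and error handling
is only outlined):

content_by_lua '
    local memcached = require "resty.memcached"

    local memc, err = memcached:new()
    if not memc then
        ngx.log(ngx.ERR, "failed to instantiate memc: ", err)
        return ngx.exit(500)
    end
    memc:set_timeout(1000)  -- 1 sec

    -- pick a backend (round-robin state would normally live in a Lua module)
    local servers = { {"192.168.1.226", 22133}, {"192.168.1.227", 22133} }
    local server = servers[math.random(#servers)]

    local ok, err = memc:connect(server[1], server[2])
    if not ok then
        ngx.log(ngx.ERR, "failed to connect: ", err)
        return ngx.exit(500)
    end

    local ok, err = memc:set("sirocco_dsr", ngx.var.memc_value)
    if not ok then
        ngx.log(ngx.ERR, "failed to set: ", err)
        return ngx.exit(500)
    end

    -- put the connection into the keepalive pool for reuse
    memc:set_keepalive(10000, 10)

    ngx.say("OK")
';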

Best regards,
-agentzh

luis....@interactive3g.com

Jan 23, 2013, 2:19:18 PM
to openre...@googlegroups.com
Hi,

Yep, three good reasons. I have just benchmarked my working solution, so I will go ahead and switch to lua-resty-memcached and compare.

Thanks,
Luis