Lua cosocket API in header and body filters


avi...@adallom.com

Mar 12, 2013, 12:01:27 PM3/12/13
to openre...@googlegroups.com
Hi,

Can anyone elaborate why cosockets do not currently work in Lua's header and body filters?
Will this feature be added in the future?

Thanks...

agentzh

Mar 12, 2013, 2:58:05 PM3/12/13
to openre...@googlegroups.com
Hello!

On Tue, Mar 12, 2013 at 9:01 AM, aviram wrote:
> Can anyone elaborate why cosockets do not currently work in Lua's header and
> body filters?

Because Nginx's output header and body filter chains are implemented
as plain C function calling chains in the Nginx core. Unlike Nginx's
rewrite, access, and content phase handlers, such C function calls do
not support the execution suspension required by nonblocking I/O, so
there is nothing we can do about it on our side.
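[Editor's note: to make the limitation concrete, here is a minimal sketch of what happens if you try; the upstream name is hypothetical, and ngx_lua aborts the cosocket call with an error along the lines of "api disabled in the context of body_filter_by_lua*" (the exact wording and the point of failure may vary by version):]

```nginx
location /t {
    proxy_pass http://backend;  # hypothetical upstream

    # Cosockets must be able to yield the current Lua coroutine, but
    # the body filter chain is a plain C call stack that cannot be
    # suspended, so this attempt fails immediately instead of doing I/O.
    body_filter_by_lua '
        local ok, err = pcall(function()
            local sock = ngx.socket.tcp()
            assert(sock:connect("127.0.0.1", 6379))
        end)
        if not ok then
            ngx.log(ngx.ERR, "cosocket in body filter: ", err)
        end
    ';
}
```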

> Will this feature be added in the future?
>

If and only if we have a new output filter mechanism in the Nginx core
:) Not sure if Nginx 2.0 will have this though. Redesigning the output
filter mechanism also means redesigning all those C modules that use
output filters.

Best regards,
-agentzh

Brian Akins

Mar 12, 2013, 6:29:50 PM3/12/13
to openre...@googlegroups.com
When we do our content generators in Lua, we just do our own "filter
stack." Instead of calling ngx.print, we just call our own function
that runs it through a table of functions. Of course, this only works
because we generate the content with Lua :)
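[Editor's note: a minimal sketch of the pattern Brian describes — all names here are hypothetical, and it assumes the content is generated by Lua in the content phase:]

```lua
-- A tiny hand-rolled "filter stack": each filter takes a chunk of
-- output and returns the (possibly transformed) chunk.
local filters = {
    function(chunk) return (chunk:gsub("foo", "bar")) end,  -- rewrite
    function(chunk) return chunk:upper() end,               -- uppercase
}

-- Call this instead of ngx.print for Lua-generated content.
local function emit(chunk)
    for _, f in ipairs(filters) do
        chunk = f(chunk)
    end
    ngx.print(chunk)
end

emit("some foo content")  -- sends "SOME BAR CONTENT" downstream
```

Because everything runs in the content phase, these filter functions are free to use cosockets, unlike real nginx output filters.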

--Brian

agentzh

Mar 12, 2013, 6:38:50 PM3/12/13
to openre...@googlegroups.com
Hello!
Yes, this should work perfectly since it's all in Lua :)

The hard part is when running Lua to filter contents generated by
other Nginx C modules like ngx_proxy :)

Best regards,
-agentzh

Brian Akins

Mar 12, 2013, 6:41:10 PM3/12/13
to openre...@googlegroups.com
On Tue, Mar 12, 2013 at 6:38 PM, agentzh <age...@gmail.com> wrote:
> The hard part is when running Lua to filter contents generated by
> other Nginx C modules like ngx_proxy :)

Working to get rid of that in our environment ;)

Ron Gomes

Mar 13, 2013, 7:32:23 AM3/13/13
to openre...@googlegroups.com
Assuming that you're not being facetious: when you say "our" environment, do you mean "your" specific environment, or do you mean the suite of tools/modules that are used with ngx_lua?

If there's a practical caching alternative to ngx_proxy that can be used with nginx, either proposed or in development, I'd love to hear more about it.

Peter Booth

Mar 13, 2013, 10:17:14 AM3/13/13
to openre...@googlegroups.com, openre...@googlegroups.com
I don't know enough to have an opinion on cosockets. I can say that ngx_lua and other openresty modules have proven to be critical parts of leveraging ngx_proxy. I couldn't have implemented a bunch of complex cache strategies without openresty.

Sent from my iPhone

agentzh

Mar 13, 2013, 3:10:13 PM3/13/13
to openre...@googlegroups.com, Ray Bejjani
Hello!

On Wed, Mar 13, 2013 at 4:32 AM, Ron Gomes wrote:
> Assuming that you're not being facetious: when you say "our" environment, do
> you mean "your" specific environment, or do you mean the suite of
> tools/modules that are used with ngx_lua?
>

I believe Brian meant his own company's specific environment :)

Actually, one of the main development goals of ngx_lua is to script
the existing Nginx ecosystem, including ngx_proxy :) (The other is
to build complete web apps.)

> If there's a practical caching alternative to ngx_proxy that can be used
> with nginx, either proposed or in development, I'd love to hear more about
> it.
>

My colleague Ray Bejjani has been working on an Nginx C module that
extends the ngx_lua module to expose a Lua API for manipulating the
http cache used by ngx_proxy. It's been a company project at
CloudFlare and he says he's going to opensource it.

I've cc'd Ray and I'll try to get him to reply here ;)

Best regards,
-agentzh

Brian Akins

Mar 13, 2013, 6:50:19 PM3/13/13
to openre...@googlegroups.com

On Mar 13, 2013, at 7:32 AM, Ron Gomes <rgo...@consumer.org> wrote:

> Assuming that you're not being facetious: when you say "our" environment, do you mean "your" specific environment, or do you mean the suite of tools/modules that are used with ngx_lua?

My specific environment. We've been using shared_dict and/or redis for most of our caching needs rather than the standard nginx cache (proxy/fastcgi, etc). This gives us much more control over the entire process.

I looked at writing a Lua interface to the standard nginx caches and found there are a ton of assumptions those modules make that didn't fit our use cases.
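[Editor's note: for readers curious what the shared_dict flavor of this approach looks like, here is a rough sketch; the upstream name, key scheme, and TTL are all hypothetical:]

```nginx
lua_shared_dict page_cache 64m;

server {
    location /proxy/ {
        internal;
        proxy_pass http://backend/;  # hypothetical upstream
    }

    location / {
        content_by_lua '
            local cache = ngx.shared.page_cache
            local key = ngx.var.uri
            local body = cache:get(key)
            if not body then
                -- cache miss: fetch via an internal subrequest
                local res = ngx.location.capture("/proxy" .. key)
                if res.status ~= 200 then
                    return ngx.exit(res.status)
                end
                body = res.body
                cache:set(key, body, 60)  -- 60s TTL
            end
            ngx.print(body)
        ';
    }
}
```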

--Brian

James Hurst

Mar 14, 2013, 5:45:54 AM3/14/13
to openre...@googlegroups.com
 On 13 March 2013 19:10, agentzh <age...@gmail.com> wrote:
 
> My colleague Ray Bejjani has been working on an Nginx C module that
> extends the ngx_lua module to expose a Lua API for manipulating the
> http cache used by ngx_proxy. It's been a company project at
> CloudFlare and he says he's going to opensource it.

This is really interesting to me, would love to hear more Ray ;)

I'll be pushing out a new version of the Ledge module soon (https://github.com/pintsized/ledge), with lots of new features for caching with ngx_lua, but we're currently still storing responses in Redis (so in memory), which obviously doesn't suit all cases. I've been interested in hybrid solutions (this goes back to the previously discussed question of a streaming api for ngx.location.capture), but perhaps having an option to manipulate the built-in cache could be pragmatic.


On 13 March 2013 22:50, Brian Akins <br...@akins.org> wrote:
 
> I looked at writing a Lua interface to the standard nginx caches and found there are a ton of assumptions those modules make that didn't fit our use cases.

Yes, I ended up in the same place. It's surprising how many assumptions there are, which is probably just because covering all cache patterns properly can be fiddly, and this is perhaps much easier to express in Lua. I feel like I'm finally getting somewhere with 'Ledge', but there are still many cases not covered.

Anyway, I'm definitely interested to see what Ray has developed.

-- 
James Hurst

Brian Akins

Mar 14, 2013, 8:41:45 AM3/14/13
to openre...@googlegroups.com
On Thu, Mar 14, 2013 at 5:45 AM, James Hurst <ja...@riverratrecords.com> wrote:
> I
> feel like I'm finally getting somewhere with 'Ledge', but there are still
> many cases not covered.

Cool. I played with some of the early versions.

If we ever have an "openresty conference" or BoF at another
conference, I'd like to talk about some of the stuff we've been
playing with. Yeah I could do it in a blog post, but I'm lazy and an
openresty conference (or a general nginx conference or track at a
larger conference) would be very cool :)

--Brian

Ron Gomes

Mar 14, 2013, 9:27:31 AM3/14/13
to openre...@googlegroups.com, Ray Bejjani
On Wednesday, March 13, 2013 3:10:13 PM UTC-4, agentzh wrote:
> My colleague Ray Bejjani has been working on an Nginx C module that
> extends the ngx_lua module to expose a Lua API for manipulating the
> http cache used by ngx_proxy. It's been a company project at
> CloudFlare and he says he's going to opensource it.
>
> I've cc'd Ray and I'll try to get him to reply here

Yes, I'd love to hear more about this.  It sounds like it might be ideal for our use case.  In order to replicate, with nginx, the caching environment that we're currently using with Apache--which even there requires some custom changes to mod_cache and htcacheclean--I'm going to have to look at modifying the Cache Purge module so that it will support expiration and not just purging (deletion) of cache entries.

But if the ngx_proxy cache were manipulable in ngx_lua it would probably be trivial to implement the additional functionality that we need, without using the Cache Purge module at all.

Ray Bejjani

Mar 14, 2013, 11:26:39 AM3/14/13
to openre...@googlegroups.com
Hello!
Apologies for the late reply; I should have been paying more attention to the mailing list.

On Thu, Mar 14, 2013 at 6:27 AM, Ron Gomes <rgo...@consumer.org> wrote:
> Yes, I'd love to hear more about this.  It sounds like it might be ideal for our use case.  In order to replicate, with nginx, the caching environment that we're currently using with Apache--which even there requires some custom changes to mod_cache and htcacheclean--I'm going to have to look at modifying the Cache Purge module so that it will support expiration and not just purging (deletion) of cache entries.
>
> But if the ngx_proxy cache were manipulable in ngx_lua it would probably be trivial to implement the additional functionality that we need, without using the Cache Purge module at all.
I should preface this by saying that the module never saw real production; the code was never actually enabled (though writing it was a good way to learn about nginx and Lua at the same time!). It had a very simple goal to begin with: allow us to manipulate the internal expire time to control how long a resource is actually cached.

It's been a while (and I still need to find the code), but I think it exposed the r->cache field on the request object. It assumes it's running in a filter (or some other context where r->cache may be set). It also handled all the shared-memory locking for the shared sections (like the cache it came from and its fs_size) so that Lua-space didn't need to worry about it. One side effect was that it batched all operations, so you called something like a getMetaData/setMetaData pair of functions that gave you a table with all the fields.

Thinking about it now, a major flaw was that while we avoided races via the locks, we didn't prevent multiple requests across workers from clobbering the same cache entry.

I'll try to find the code, since that will likely be a better explanation than a long paragraph. In short, it let you read and set almost all of r->cache.
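[Editor's note: based on Ray's description, usage presumably looked something like the following sketch. The module had not been released at this point, so every function and field name below is a guess for illustration, not the actual API:]

```nginx
# Hypothetical API: fetch the batched metadata table for r->cache,
# tweak the expiry, and write the whole batch back under the lock.
header_filter_by_lua '
    local meta = ngx.cache.get_metadata()   -- hypothetical call
    if meta then
        meta.valid_sec = ngx.time() + 300   -- extend expiry by 5 min
        ngx.cache.set_metadata(meta)        -- hypothetical call
    end
';
```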
--
Ray Bejjani

James Hurst

Mar 14, 2013, 1:35:12 PM3/14/13
to openre...@googlegroups.com
On 14 March 2013 12:41, Brian Akins <br...@akins.org> wrote:
 
> If we ever have an "openresty conference" or BoF at another
> conference, I'd like to talk about some of the stuff we've been
> playing with. Yeah I could do it in a blog post, but I'm lazy and an
> openresty conference (or a general nginx conference or track at a
> larger conference) would be very cool :)

Yeah, nice idea. We might have to just use email for now though ;) If anyone is in London I'm definitely up for openresty chats over beers. I suspect we're all quite spread out though...

agentzh

Mar 15, 2013, 6:25:55 PM3/15/13
to openre...@googlegroups.com
Hello!

On Thu, Mar 14, 2013 at 2:45 AM, James Hurst wrote:
>
> This is really interesting to me, would love to hear more Ray ;)
>
> I'll be pushing out a new version of the Ledge module soon
> (https://github.com/pintsized/ledge), with lots of new features for caching
> with ngx_lua, but we're currently still storing responses in Redis (so in
> memory), which obviously doesn't suit all cases. I've been interested in
> hybrid solutions (this goes back to the previously discussed question of a
> streaming api for ngx.location.capture), but perhaps having an option to
> manipulate the built in cache could be pragmatic.
>

Just out of curiosity, have you checked out my ngx_srcache module? ;)

http://wiki.nginx.org/HttpSRCacheModule

This is yet another caching framework for arbitrary Nginx responses,
similar to Apache's mod_cache, supporting various caching storage
backends like Memcached, Redis, or anything else via Nginx subrequests.
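[Editor's note: for reference, the usual ngx_srcache usage pattern with the memc module looks roughly like this; the upstream name and expiry are hypothetical, and the wiki page above has the authoritative examples:]

```nginx
# Internal location wrapping a memcached server via the memc module.
location = /memc {
    internal;
    memc_connect_timeout 100ms;
    set $memc_key $query_string;
    set $memc_exptime 300;          # cache entries for 5 minutes
    memc_pass 127.0.0.1:11211;
}

location /foo {
    set $key $uri$args;
    # On a hit, serve the stored response; on a miss, proxy and store.
    srcache_fetch GET /memc $key;
    srcache_store PUT /memc $key;
    proxy_pass http://backend;      # hypothetical upstream
}
```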

To make Nginx's built-in http cache fully scriptable by Lua, I'm
afraid it'll require extra hooks in ngx_http_upstream. Maybe we can
get further by collaborating with the Nginx company :)

Best regards,
-agentzh

agentzh

Mar 15, 2013, 6:28:41 PM3/15/13
to openre...@googlegroups.com
Hello!

On Thu, Mar 14, 2013 at 5:41 AM, Brian Akins <br...@akins.org> wrote:
> If we ever have an "openresty conference" or BoF at another
> conference, I'd like to talk about some of the stuff we've been
> playing with. Yeah I could do it in a blog post, but I'm lazy and an
> openresty conference (or a general nginx conference or track at a
> larger conference) would be very cool :)
>

CloudFlare is going to organize a monthly ngx_lua/openresty meetup at
its office in San Francisco. Would you be interested in joining? ;)

Really looking forward to seeing you all in person!

There's no timeline yet and we're still thinking it through :)

Best regards,
-agentzh

Brian Akins

Mar 19, 2013, 3:19:09 PM3/19/13
to openre...@googlegroups.com
On Fri, Mar 15, 2013 at 6:28 PM, agentzh <age...@gmail.com> wrote:
> CloudFlare is going to organize monthly ngx_lua/openresty meetup in
> its workplace at San Francisco. Will you be interested to join? ;)

I'll be in SF in April for http://chefconf.opscode.com/
Maybe I can stop by then.

agentzh

Mar 19, 2013, 3:31:25 PM3/19/13
to openre...@googlegroups.com
Hello!

On Tue, Mar 19, 2013 at 12:19 PM, Brian Akins <br...@akins.org> wrote:
> I'll be in SF in April for http://chefconf.opscode.com/
> Maybe I can stop by then.
>

That's great! See you then :)

Best regards,
-agentzh

Ray Bejjani

Mar 29, 2013, 7:21:16 PM3/29/13
to openre...@googlegroups.com
Hello again,

On Thu, Mar 14, 2013 at 8:26 AM, Ray Bejjani <r...@cloudflare.com> wrote:
> I'll try to find the code, since that will likely be a better explanation than a long paragraph. In short, it let you read and set almost all of r->cache.

I finally prepped the lua-cache module and put it up on GitHub. Feedback welcome!

--
Ray Bejjani

Ron Gomes

Apr 26, 2013, 2:53:41 PM4/26/13
to openre...@googlegroups.com
Thanks for posting this, Ray; I've only just now been able to pick it up and incorporate it into an openresty build.

I've played with it a bit but I'm not sure that it provides the hooks needed to implement a generic cache-expiration feature.  What I'm looking to do is essentially to provide a variation of what the "proxy cache purge" module does, so that a page can be explicitly expired rather than deleted from the cache.  It seems that the cache information is only visible in the context of an actual request for the page; what I need to do is reach into the cache and manipulate the metadata of an arbitrary page without requesting or re-requesting it.

Am I overlooking something that the API provides?

srihari jonnalagadda

Jan 11, 2017, 10:43:24 AM1/11/17
to openresty-en
Hi,

What if we abort the currently running filter chain on yield, and start a fresh chain after resume with the transformed body? Am I missing something?

Thank you,
Srihari.J