Lua script caching control?


Readis

Mar 13, 2013, 8:13:19 PM3/13/13
to redi...@googlegroups.com
Hi,

Since Redis caches every script that has been run, forever, to allow EVALSHA, one concern I have is that memory usage may grow accidentally from caching them.

For example, one may generate a Lua script to send it to the server, like

    argv = []
    lua = "...; if .. then /* some atomic/transactional and condition checking operations */";
    i = 1;
    for (entry in list of data) {
       argv << entry.value
       lua += "redis.call('lpush', 'datalist', ARGV[#{i}]); " // ARGV[1], ARGV[2], ...
       i += 1;
    }
    lua += "end;";
    redis.eval(lua, argv);

Once such a script is run against a large data set, it is cached permanently. There is no way to opt out, even if we never use EVALSHA.
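For what it's worth, the growth mechanism is easy to see offline: the server keys its script cache by the SHA1 of the script source, so every distinct generated body occupies a new slot. A minimal sketch, pure Python with no server required (names are illustrative):

```python
import hashlib

def script_sha(script):
    # SCRIPT LOAD / EVALSHA identify a script by the SHA1 hex digest of its source text
    return hashlib.sha1(script.encode()).hexdigest()

# Generating the script from the data produces a distinct body, and therefore a
# distinct cache entry, for every differently sized batch:
batch_of_two = "redis.call('lpush', 'datalist', ARGV[1]); redis.call('lpush', 'datalist', ARGV[2]);"
batch_of_one = "redis.call('lpush', 'datalist', ARGV[1]);"

assert script_sha(batch_of_two) != script_sha(batch_of_one)  # two cache slots for one logical operation
```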

I think a more prudent approach would be to allow a setting for a limited script cache, and the code can always fall back:

    try {
       redis.evalsha(myScriptSHA)
    } catch (e) {
       if (e.message =~ /No such SHA/) // there should be some fixed error symbol instead of message text
          redis.eval(myScript)
    }

This could be a DB setting, defaulting to unlimited, so it would be backward compatible.



Josiah Carlson

Mar 14, 2013, 3:07:27 AM3/14/13
to redi...@googlegroups.com
On Wed, Mar 13, 2013 at 5:13 PM, Readis <winson...@gmail.com> wrote:
> Hi,
>
> Since Redis always caches the script been run forever to allow evalsha, one
> concern I have is that there may be an accidental growth of memory usages
> caching them.

If the total volume of the code you generate is comparable to the size
of your data, I would suggest that you might be doing it wrong.

> For example, one may generate a Lua script to send it to the server, like
>
>> argv = []
>> lua = "...; if .. then /* some atomic/transactional and condition checking operations */";
>> i = 1;
>> for (entry in list of data) {
>>    argv << entry.value
>>    lua += "redis.call('lpush', 'datalist', ARGV[#{i}]); " // ARGV[1], ARGV[2], ...
>> }
>> lua += "end;";
>> redis.eval(lua, argv);
>>
> Once such a script is run against a large data set, it would be cached
> permanently. There is no way to opt-out even if we never use evalsha.

Might I suggest that you instead implement individual functions that
can test a given class of pre-conditions, load them into Redis with
'script load', then *call* them *within* Redis via f_<sha>(...)
(thanks to Fritzy http://redisconf.com/video/nathan-fritz, jump to 9
minutes in). You may also be able to use redis.call('evalsha', <sha>,
...), but test it.

That would let you pass a list of attributes/data to validate against,
which you can then test within Redis, *without* needing code
generation.
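A no-codegen version of the earlier example might look like the following: one fixed Lua script that loops over ARGV on the server side. Its SHA1 (what SCRIPT LOAD returns and EVALSHA consumes) never changes however much data you pass, so it occupies exactly one cache slot. Pure-Python sketch; the commented-out client call assumes a redis-py-style evalsha(sha, numkeys, *keys_and_args) signature:

```python
import hashlib

# One fixed script: a precondition check plus a loop over ARGV, all inside Lua.
PUSH_ALL = """
if redis.call('exists', KEYS[1]) == 1 then
  for i = 1, #ARGV do
    redis.call('lpush', KEYS[1], ARGV[i])
  end
end
"""

# SCRIPT LOAD would return this same digest every time, regardless of the data:
sha = hashlib.sha1(PUSH_ALL.encode()).hexdigest()

# r.evalsha(sha, 1, 'datalist', *values)  # hedged: requires a live server
print(sha)
```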

> I think a more prudent approach would be to allow a setting for limited
> script cache and the codes can always do a fallback
>
> try {
>    redis.evalsha(myScriptSHA)
> } catch (e) {
>    if (e.message =~ /No such SHA/) // there should be some fixed error
>       symbol instead of message text
>       redis.eval(myScript)
> }
>
> This can be a DB setting default to unlimited so it will be backward
> compatible.

You can also clear the cache at any time with "SCRIPT FLUSH" if the
INFO output for used_memory_lua is too high, if you find the alternate
methods I offered to be insufficient. Given the fallback method you
offered (which everyone uses, as far as I can tell, though the first
pass usually includes a 'script load' to cache the sha), this is
likely the quickest and easiest way for you to address your own issue
without relying on Salvatore to change Redis.
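The used_memory_lua check plus SCRIPT FLUSH combination fits in a few lines. A sketch, assuming a redis-py-style client with info() and script_flush() methods (the threshold is arbitrary):

```python
def flush_scripts_if_bloated(client, limit_bytes=64 * 1024 * 1024):
    # Read used_memory_lua (bytes) from INFO's memory section and run
    # SCRIPT FLUSH when it exceeds the limit. Safe for clients that use the
    # EVALSHA -> NOSCRIPT -> EVAL fallback, since they will simply re-cache.
    used = int(client.info('memory').get('used_memory_lua', 0))
    if used > limit_bytes:
        client.script_flush()
        return True
    return False
```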

Regards,
- Josiah

Readis

Mar 14, 2013, 12:59:29 PM3/14/13
to redi...@googlegroups.com
Thanks. Such a long script shouldn't exist; I'm just concerned about bugs creeping in. It looks like running SCRIPT FLUSH once in a while from a background thread can defend proactively if needed.
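The background-thread idea, sketched with a redis-py-style client exposing script_flush() (the interval and wiring are illustrative, not a tested implementation):

```python
import threading

def start_script_flusher(client, interval_seconds=3600.0):
    # Run SCRIPT FLUSH periodically as a proactive defense against
    # unbounded script-cache growth. Clients using the usual
    # EVALSHA -> NOSCRIPT -> EVAL fallback repopulate the cache as needed.
    stop = threading.Event()

    def loop():
        while not stop.wait(interval_seconds):
            client.script_flush()

    threading.Thread(target=loop, daemon=True).start()
    return stop  # call stop.set() to halt the flusher
```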

Josiah Carlson

Mar 14, 2013, 2:04:30 PM3/14/13
to redi...@googlegroups.com
On Thu, Mar 14, 2013 at 9:59 AM, Readis <winson...@gmail.com> wrote:
> Thanks. Such a long script shouldn't exist; I just concern about bugs
> creeping in. Looks like running SCRIPT FLUSH once awhile in a background
> thread can defend proactively if need to.

But you mentioned that it wasn't about just one script, it was about
*many* scripts being generated. Either way, as long as you're happy :)

- Josiah
> --
> You received this message because you are subscribed to the Google Groups
> "Redis DB" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to redis-db+u...@googlegroups.com.
> To post to this group, send email to redi...@googlegroups.com.
> Visit this group at http://groups.google.com/group/redis-db?hl=en.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>

Winson Quock

Mar 14, 2013, 3:35:44 PM3/14/13
to redi...@googlegroups.com
Both: many scripts being generated, or one script with long steps for each input item. Somebody may accidentally write code that generates such scripts. Hopefully not; I just want to be cautious. :-)


Readis

Mar 22, 2013, 3:59:16 PM3/22/13
to redi...@googlegroups.com
Actually, I do have some real concerns, based directly on memory profiling of a server running in a restricted test environment.

We periodically run Redis's INFO command and send the used_memory_* stats to a performance profiling tool (New Relic). All the scripts we generate, via a helper class that abstracts the code generation, take their keys and values through ARGV rather than hard-coding them. We do have some code generated in a loop, but the loop is over a predefined list of symbol names rather than unbounded data.

As seen in this diagram:
  1. used_memory_lua grew to 280MB, which would indicate a leak, but...
  2. Right before my performance test at 10am, I forced the app server to clean up previous data, which involves deleting keys. The used_memory metric dropped as expected, but used_memory_lua also suddenly dropped without any SCRIPT FLUSH being run from either the console or the app (I checked that there is no SCRIPT FLUSH in the code).
  3. Then I started applying massive load. used_memory grew dramatically as expected; the Lua memory, however, also grew to 40MB and stayed stable.
  4. The app's automatic cleanup kicked in at 12pm, reducing used_memory but not quite the Lua memory yet. This cleanup ran in batches and took a few rounds to complete.
  5. Then I ran a SCRIPT FLUSH, and the Lua memory went to near zero (~40KB).
  6. I applied another round of load to the server and watched the Lua memory grow as the app ran.
Why would the Lua memory usage drop by itself? What does used_memory_lua measure, exactly?

Thanks



On Thursday, March 14, 2013 12:35:44 PM UTC-7, Readis wrote:
Both many scripts being generated or one script with long steps for each input item. Somebody may accidentally write codes that generate such scripts. Hopefully not, just wanna be cautious. :-)

redis-mem-lua-profiling.png

Geerten van Meel

Mar 22, 2013, 8:47:16 PM3/22/13
to redi...@googlegroups.com
280 MB of Lua memory is horrid. I cannot really contribute to this discussion, but I am very curious about the order of magnitude of the size (lines of code on average) and the number of scripts, if you can share that.

Readis

Mar 23, 2013, 1:43:58 AM3/23/13
to redi...@googlegroups.com
I found out. While most of the script steps are correctly generated, the lock-checking step didn't use the ARGV convention: it hard-coded the timestamp. This wasn't detected by our unit tests because we only test generation of the script through the helper API! Once that was fixed, used_memory_lua stays around 70KB. The metric does go up and down slightly over time, though.
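For anyone hitting the same bug class: any value interpolated into the script body, a timestamp in this case, yields a brand-new script (and cache entry) per call, while routing it through ARGV keeps one stable SHA. A pure-Python illustration (function names and script bodies are hypothetical):

```python
import hashlib

def sha_of(script):
    # What SCRIPT LOAD would return for this script body
    return hashlib.sha1(script.encode()).hexdigest()

def buggy_lock_check(timestamp):
    # Timestamp baked into the script body: a new script per call, cache grows forever
    return f"if tonumber(redis.call('get', KEYS[1])) < {timestamp} then return 1 end"

def fixed_lock_check():
    # Timestamp passed via ARGV: one script body, one cache slot, forever
    return "if tonumber(redis.call('get', KEYS[1])) < tonumber(ARGV[1]) then return 1 end"

assert sha_of(buggy_lock_check(1000)) != sha_of(buggy_lock_check(2000))  # cache bloat
assert sha_of(fixed_lock_check()) == sha_of(fixed_lock_check())          # stable SHA
```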