Estimating maxmemory accurately in the MSOpenTech port of Redis

abhijit damle

Jun 28, 2015, 6:02:13 PM
to redi...@googlegroups.com
What is a reasonably accurate way of estimating what the maxmemory setting should be for an application that is supposed to run 24/7 all year round? I set a particular value in the config file when the Redis server started, but over time memory filled up and my application started getting "Error: OOM command not allowed when used memory > 'maxmemory'". Is there a way to increase this limit at runtime, using the ServiceStack.Redis client-side API, without needing to stop and restart Redis? Also, when such an error crops up, is there a way to have Redis automatically purge some keys from memory (without removing them from the .rdb file) as a result of a policy setting somewhere in the config or in Redis?

Josiah Carlson

Jun 29, 2015, 5:33:22 PM
to redi...@googlegroups.com
Replies inline...

On Sun, Jun 28, 2015 at 3:02 PM, abhijit damle <abhiji...@gmail.com> wrote:
What is a reasonably accurate way of estimating what the maxmemory setting should be for an application that is supposed to run 24/7 all year round?

That 100% depends on the application. You are going to need to provide more details on your application and current dataset for anyone to answer this at all.
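As a rough, illustrative starting point (the numbers below are made up, not measurements from your system): run the application under realistic load for a while, watch what Redis actually uses with INFO memory, and size maxmemory from the observed peak plus headroom for growth, client buffers, and copy-on-write during background saves.

  $ redis-cli INFO memory
  # Memory
  used_memory:1527937112
  used_memory_human:1.42G
  used_memory_peak:2147483648
  used_memory_peak_human:2.00G
  mem_fragmentation_ratio:1.08
  ...

used_memory_peak is the high-water mark since startup; if it sits around 2 GB in normal operation, a maxmemory of 3-4 GB on a box with enough physical RAM leaves room to grow without immediately hitting the limit.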
 
I set a particular value in the config file when the Redis server started, but over time memory filled up and my application started getting "Error: OOM command not allowed when used memory > 'maxmemory'". Is there a way to increase this limit at runtime, using the ServiceStack.Redis client-side API, without needing to stop and restart Redis?

In the official version of the Redis server for *nix, you can run "CONFIG SET maxmemory <bytes>" to change the limit at runtime, and "CONFIG GET maxmemory" to read the current value. I don't know whether that is supported in the MSOpenTech port. But before you raise the limit, you'll also want to know whether you have enough memory on the box, because if Redis starts swapping to disk... you're going to have a bad time.
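For example, from redis-cli (the values here are placeholders, and I haven't verified this against the MSOpenTech build, but the commands are standard):

  $ redis-cli CONFIG GET maxmemory
  1) "maxmemory"
  2) "1073741824"
  $ redis-cli CONFIG SET maxmemory 2147483648
  OK

A runtime CONFIG SET is not written back to the config file, so the old limit returns after a restart unless you also edit redis.conf (or use CONFIG REWRITE, available since Redis 2.8). Most client libraries, ServiceStack.Redis included, have some way to send an arbitrary command to the server, so you can usually issue the same CONFIG SET from application code; check the client's documentation for the exact call.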
 
Also, when such an error crops up, is there a way to have Redis automatically purge some keys from memory (without removing them from the .rdb file) as a result of a policy setting somewhere in the config or in Redis?

If you delete a key from Redis, it will be removed from the .rdb snapshot the next time Redis does the snapshot.

Data in the snapshot is only read by Redis at startup, so I don't know why you would care whether the key gets deleted from the .rdb; it's not like Redis will read that data back while it is otherwise functioning normally.

If you want Redis to automatically delete keys when you hit a certain memory limit, set 'maxmemory-policy', similar to how you set maxmemory. The available policies and what they do are documented in the comments of the sample redis.conf that ships with Redis.
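As an illustrative redis.conf sketch (placeholder values, not a recommendation):

  maxmemory 2gb
  # allkeys-lru evicts the (approximately) least recently used keys across the
  # whole keyspace once the limit is hit. Other options are volatile-lru,
  # allkeys-random, volatile-random, volatile-ttl, and noeviction, the default,
  # which is what produces the OOM error you are seeing on writes.
  maxmemory-policy allkeys-lru

Both settings can also be changed at runtime, e.g. "CONFIG SET maxmemory-policy allkeys-lru".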

 - Josiah
 

