How to save memory from unpopular/cold Redis?


Michael Scofield

May 6, 2020, 3:35:26 AM
to Redis DB

We have a lot of Redis instances, consuming TBs of memory across hundreds of machines.


As our business activity goes up and down, some Redis instances are simply not used that frequently any more -- they are "unpopular" or "cold". But Redis stores everything in memory, so a lot of infrequently accessed data that should live on cheap disk is occupying expensive memory.


We are exploring a way to reclaim memory from these unpopular/cold Redis instances, so as to reduce our machine usage.


We cannot delete data, nor can we migrate to another database. Is there some way to achieve our goal?


PS: We are thinking of some Redis-compatible product that can "mix" memory and disk, i.e. one that stores hot data in memory but cold data on disk, USING LIMITED RESOURCES. We know about Redis Labs' "Redis on Flash" (ROF) solution, but it uses RocksDB, which is very memory-unfriendly. What we want is a very memory-restrained product. Besides, ROF is not open source :(


Thanks in advance!

Benjamin Sergeant

May 6, 2020, 10:51:51 AM
to redi...@googlegroups.com
How big are your keys?
If they are relatively big, you could try to compress the large ones (with zlib, or the plethora of newer compression algorithms: lz4, zstandard, etc.). Some of the newer algorithms can even be trained on your data set, if it is coherent. That could be a cool Redis module, btw.
For small keys, someone you know wrote this a while ago :) -> https://github.com/antirez/smaz
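To illustrate the compression idea, here is a minimal sketch using zlib from the Python standard library. The dict stands in for a Redis client; with a real client you would pass the same compressed bytes to SET and GET:

```python
import zlib

def set_compressed(store, key, value, level=6):
    # Compress the UTF-8 bytes of the value before writing it.
    store[key] = zlib.compress(value.encode("utf-8"), level)

def get_compressed(store, key):
    raw = store.get(key)
    return None if raw is None else zlib.decompress(raw).decode("utf-8")

store = {}  # stand-in for a Redis client
payload = '{"user": 42, "tags": ["redis", "cache", "cold"]}' * 200
set_compressed(store, "cold:session:1", payload)
ratio = len(store["cold:session:1"]) / len(payload.encode("utf-8"))
assert get_compressed(store, "cold:session:1") == payload
print(f"compressed to {ratio:.0%} of original size")
```

The win depends heavily on how repetitive the values are; JSON-ish payloads like the one above compress very well, random binary data barely at all.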

--
You received this message because you are subscribed to the Google Groups "Redis DB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to redis-db+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/redis-db/3ae0666c-5ebd-49fa-82a5-ea5ab7e97eca%40googlegroups.com.

Michael Scofield

May 6, 2020, 10:35:01 PM
to Redis DB
Thanks for your reply. We have plenty of both large and small keys. Compression certainly helps, but the keys will still occupy memory. We would like to trade key retrieval speed for memory, so we are more interested in a solution that stores keys on disk.

On Wednesday, May 6, 2020 at 10:51:51 PM UTC+8, Benjamin Sergeant wrote:
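The trade being asked for (slower retrieval in exchange for less memory) can be sketched as a toy two-tier store: recently used keys stay in a dict, idle ones spill to disk via the standard-library shelve module. This is purely illustrative, not a Redis replacement, and all names are made up:

```python
import os
import shelve
import tempfile
from collections import OrderedDict

class TieredStore:
    """Toy hot/cold store: at most `hot_max` keys in memory, the rest on disk."""

    def __init__(self, path, hot_max=2):
        self.hot = OrderedDict()       # in-memory tier, kept in LRU order
        self.cold = shelve.open(path)  # on-disk tier
        self.hot_max = hot_max

    def set(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        while len(self.hot) > self.hot_max:
            # Spill the least-recently-used key to disk.
            old_key, old_val = self.hot.popitem(last=False)
            self.cold[old_key] = old_val

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)
            return self.hot[key]
        if key in self.cold:
            # Slow path: read from disk and promote back into memory.
            value = self.cold.pop(key)
            self.set(key, value)
            return value
        return None

path = os.path.join(tempfile.mkdtemp(), "cold.db")
store = TieredStore(path, hot_max=2)
for i in range(5):
    store.set(f"k{i}", f"v{i}")
assert len(store.hot) == 2          # only the two hottest keys stay in RAM
assert store.get("k0") == "v0"      # cold key is still retrievable, from disk
```

Real products in this space do the same thing with much better eviction policies and on-disk formats, but the core trade is exactly this: a `get` on a cold key pays a disk read.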

James Harbal

May 6, 2020, 11:25:48 PM
to Redis DB
What kind of keys/values do you have?
Have you considered Elasticsearch?

Benjamin Sergeant

May 6, 2020, 11:30:46 PM
to redi...@googlegroups.com
Just found something called zram, which compresses memory at the kernel level: https://wiki.archlinux.org/index.php/Improving_performance#Zram_or_zswap

macOS has done that for a while. Obviously I have no idea how to operate such a thing :)
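For reference, a typical zram swap setup on a Linux box looks roughly like this. The size, the lz4 choice, and the priority are illustrative; it needs root, and the exact knobs vary by distro and kernel version:

```shell
modprobe zram num_devices=1
echo lz4 > /sys/block/zram0/comp_algorithm   # pick a fast compressor
echo 8G  > /sys/block/zram0/disksize         # uncompressed capacity of the device
mkswap /dev/zram0
swapon -p 100 /dev/zram0                     # prefer zram over any disk swap
```

Note that Redis itself strongly discourages letting its memory get swapped (even to compressed RAM) because latency becomes unpredictable, so this would need careful testing on genuinely cold instances.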


James Harbal

May 6, 2020, 11:39:09 PM
to Redis DB
ideawu/ssdb might work, depending on your version; I've never used it.

I was able to significantly reduce memory usage on my Redis's TBs of data with a custom compressor, but for it to be worth it your keys/values would have to be mostly integers or small ints, or strings that are similar in nature to each other. The data would also have to be reindexed somehow, and your write/access APIs would need an extra layer of compression/decompression.


On Wednesday, May 6, 2020 at 2:35:26 AM UTC-5, Michael Scofield wrote:
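The integer-heavy case responds well to delta-plus-varint coding. This is a generic sketch of that technique, not James's actual compressor:

```python
def encode_varint(n):
    # Little-endian base-128 varint: 7 payload bits per byte, high bit = "more".
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def compress_sorted_ints(values):
    # Store deltas between consecutive sorted ints; small gaps cost 1 byte each.
    out = bytearray()
    prev = 0
    for v in sorted(values):
        out += encode_varint(v - prev)
        prev = v
    return bytes(out)

ids = [100000 + i * 3 for i in range(1000)]  # e.g. member ids from a set
packed = compress_sorted_ints(ids)
naive = len(ids) * 8                         # 8 bytes per int64, uncompressed
print(len(packed), "bytes vs", naive, "naive")
```

With dense, sorted ids the deltas are tiny, so most values fit in a single byte, roughly an 8x saving over raw 64-bit integers in this example.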

Michael Scofield

May 6, 2020, 11:53:07 PM
to Redis DB
We have strings, hashes, sets and zsets. No, we have not considered ES, since we cannot change our database.

On Thursday, May 7, 2020 at 11:25:48 AM UTC+8, James Harbal wrote:

Greg Andrews

May 7, 2020, 1:56:20 AM
to Redis DB
I'm having a little trouble understanding what you mean by "we cannot change our database." If there were a version of Redis that supported saving idle keys on disk, you would still have to save your existing Redis data and load it into the new version. This would require a brief outage while the old memory-only Redis was stopped and the new memory-and-disk Redis was configured to listen on port 6379. Or you could change DNS so the clients would connect to a different server running the new Redis. Is this kind of outage the thing you are saying you cannot do?

Michael Scofield

May 7, 2020, 2:08:51 AM
to Redis DB
Sorry, what I meant was that we cannot change to another database that isn't compatible with the Redis command protocol, like MySQL, Mongo or Elasticsearch.
If there were some DB that accepted Redis commands while saving keys on disk, we would be more than happy to use it. We can tolerate some outage, or a DNS change, or something else.

On Thursday, May 7, 2020 at 1:56:20 PM UTC+8, Greg Andrews wrote:
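For the record, the kind of cut-over Greg describes usually starts from an RDB snapshot; roughly like this, where the hostnames, paths, and the target service name are placeholders:

```shell
# Pull a point-in-time snapshot from the old memory-only instance.
redis-cli -h old-redis.internal --rdb /backup/dump.rdb

# Seed the replacement (any disk-backed, Redis-protocol-compatible server
# that can load RDB files) from that snapshot, then repoint DNS/clients.
cp /backup/dump.rdb /var/lib/new-store/
systemctl start new-store   # placeholder service name
```

Writes arriving between the snapshot and the cut-over are lost unless the new server can also attach as a replica first, which is worth checking for whichever product is chosen.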