Fragmentation contest


Salvatore Sanfilippo

Jan 8, 2011, 10:32:56 AM
to Redis DB
Hello!

this is the Redis fragmentation contest :)

The winner is the first to post in this thread (according to the Google
Groups web interface message date) a simple script that can lead to
fragmentation.
The prize is a Redis shirt with the new logo and the tagline "all the
allocations are belong to me".

Rules:

- The program must be 200 lines of code or less
- The program must be written in Ruby, Python, Perl, Tcl, or C, using
just your preferred Redis library as a dependency (together with any
other code library of the chosen language).
- The program must demonstrably fragment memory. This means that once
it runs, the peak memory usage (as reported by the INFO used_memory
field) must be reached within at most 1 hour, while the RSS keeps
growing. In Redis 2.2 this will be very evident, as the
mem_fragmentation_ratio field of INFO will grow more and more.
- The RSS growth should continue for at least 24 hours from the first
time the program is executed.
- The fragmentation should be evident when running the program against
a Linux system with the standard libc malloc OR against Mac OS X Snow
Leopard with the default malloc implementation. There is no need to
test the program on both systems, as long as you are able to fragment
memory on one OS.
- The winner will be the first poster to provide a program able to
fragment Redis memory. Even if another program is the first to be
*verified* to work, the winner will be the first working program that
was posted.
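An entry can be checked by polling the same INFO fields the rules
mention. A minimal sketch (the `monitor` helper is illustrative only,
assumes the redis-py client and a default local instance, and is not
part of the rules):

```python
def fragmentation_ratio(used_memory, used_memory_rss):
    # RSS as seen by the operating system divided by the usage reported
    # by the allocator: this is what INFO exposes as mem_fragmentation_ratio.
    return float(used_memory_rss) / float(used_memory)

def monitor(r, interval=60):
    # Poll a redis-py client, e.g. monitor(redis.Redis()); a ratio that
    # keeps growing after the memory peak is what the contest asks for.
    # (Not executed here; assumes a running local Redis instance.)
    import time
    while True:
        info = r.info()
        print(fragmentation_ratio(info['used_memory'],
                                  info['used_memory_rss']))
        time.sleep(interval)
```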

This is an example program that DOES NOT produce fragmentation, just
to give the idea:

require 'rubygems'
require 'redis'

r = Redis.new

while true
  puts "."
  zsetname = "zset-#{rand(100)}"
  1000.times {
    r.zadd(zsetname,rand(),rand(1000))
    r.setex(rand(1000000000),60,".")
  }
  zsetname = "zset-#{rand(100)}"
  r.del zsetname
end

Cheers,
Salvatore

--
Salvatore 'antirez' Sanfilippo
open source developer - VMware

http://invece.org
"We are what we repeatedly do. Excellence, therefore, is not an act,
but a habit." -- Aristotle

Salvatore Sanfilippo

Jan 8, 2011, 10:47:46 AM
to Redis DB
Forgot one thing: memory fragmentation should be apparent against a
Redis instance running without virtual memory and, in general, with
default parameters, without a specific redis.conf file.

Wenhui Liu

Jan 8, 2011, 12:01:56 PM
to redi...@googlegroups.com
import redis
import hashlib

r = redis.Redis(host='localhost',port=6379,db=0)
i=1
m = hashlib.sha1()
while True :
        m.update(str(i))
        t = m.hexdigest()
        r.set(t,t)
        r.keys('*')
        i=i+1

After running this code, Redis gives no response anymore :-p
Even if I type:
redis > PING

My environment
        Debian 5.02 kernel 2.6.26-2-686
        redis 2.2.0 from git master
        2G ddr2667 Ram
        Intel Core Dual T7400 2.16GHz
        Cpu usage:  1.7%us,  4.7%sy,  0.0%ni, 48.0%id, 44.6%wa,  0.9%hi,  0.1%si,  0.0%st
        Memory free: 4096k
        Swap free:12k
        

Forgive me for my bad code :-)


--
You received this message because you are subscribed to the Google Groups "Redis DB" group.
To post to this group, send email to redi...@googlegroups.com.
To unsubscribe from this group, send email to redis-db+u...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/redis-db?hl=en.




--
Regards,
Wenhui Liu
I'm working now!

Salvatore Sanfilippo

Jan 8, 2011, 12:03:59 PM
to redi...@googlegroups.com
Hello Wenhui,

the goal is not to block Redis :) but to fragment memory, which is very
different ;)

Cheers,
Salvatore

albert

Jan 8, 2011, 2:00:11 PM
to Redis DB
Hi,

I have never worked with Redis before but found this an interesting
challenge. :)

I'm not sure if my solution is a valid one (I'm exploiting Redis's
fragmentation info a bit), but it may give a good idea of how to
produce fragmentation in a general way.

Here it is:

---

import redis, random
r = redis.Redis()
r.flushall()
N = 100
S = 1000

print "filling up..."
for i in xrange(0, N):
    zsetname = "zset-" + str(i)
    for j in xrange(0, S):
        r.zadd(zsetname, "#"*j, j % 11)
print "done, playing around now."

def printstate():
    info = r.info()
    print "mem frag:", info['mem_fragmentation_ratio'], ", mem:", \
        info['used_memory_human'], info['used_memory_rss']

def update(zsetname, oldnewlist):
    for oldvalue, newvalue in oldnewlist:
        score = r.zscore(zsetname, oldvalue)
        if score != None: score = float(score)
        else: score = random.randint(0, 11)
        r.zrem(zsetname, oldvalue)
        r.zadd(zsetname, newvalue, score)

def update_fragratio(zsetname, oldnewlist):
    for oldvalue, newvalue in oldnewlist:
        update(zsetname, [(oldvalue, newvalue)])
    fragratio = float(r.info()['used_memory_rss']) / float(r.info()['used_memory'])
    for oldvalue, newvalue in oldnewlist:
        update(zsetname, [(newvalue, oldvalue)])
    return fragratio

def random_string():
    s = ""
    for i in xrange(random.randint(0, 1000)):
        s += random.choice("#ABC01")
    return s

def random_oldnewlist(zsetname):
    l = []
    for i in xrange(0, 10):
        index = random.randint(0, r.zcard(zsetname) - 1)
        oldvalue, = r.zrange(zsetname, index, index)
        newvalue = random_string()
        l += [(oldvalue, newvalue)]
    return l

while True:
    for i in xrange(0, N):
        zsetname = "zset-" + str(i)

        possible_ops = []
        for j in xrange(0, 10):
            l = random_oldnewlist(zsetname)
            possible_ops += [(update_fragratio(zsetname, l), l)]

        _, best_ops = max(possible_ops)
        update(zsetname, best_ops)

        print i,
        printstate()
---

I've been running this for only a few minutes so far, and the
fragmentation ratio seems to be monotonically increasing.

I hope that, even if this may not be a valid solution, I could still
contribute a bit to the test. :)

Cya,
Albert

pn

Jan 8, 2011, 2:13:48 PM
to Redis DB
This is my first stab at it; after warming up, memory grows very slowly
but fragmentation keeps an upward trend.

require 'rubygems'
require 'redis'
KEY_RANGE = 200_000
redis = Redis.new()

def next_value(fetched_value)
  return "bar" * rand(30) if !fetched_value
  if rand(2) == 1
    fetched_value[0..(fetched_value.size / 2)]
  else
    fetched_value + "foo" * rand(fetched_value.size * 2)
  end
end

loop do
  key = rand(KEY_RANGE)
  hash_key = 'baz' * (rand(20) + 1)
  value = redis.hget(key, hash_key)
  dice = rand
  if value && dice <= 0.5
    if dice <= 0.3
      redis.del(key)
    else
      redis.hdel(key, hash_key)
    end
  else
    redis.hset(key, hash_key, next_value(redis.hget(key, hash_key)))
  end
end

Paolo

Tim Lossen

Jan 8, 2011, 5:10:30 PM
to redi...@googlegroups.com
OK, here is my attempt. Not sure if it really causes
fragmentation -- but I extracted the sorted set code that
I suspect of being the culprit, and tried to mimic our
access pattern:


require 'redis'

$redis = Redis.new

def random_id
  rand(10000000)
end

def store(id)
  now = Time.now.to_i
  $redis.zremrangebyscore('online', -1 * (now - 60), 0)
  $redis.zadd('online', -1 * now, id)
end

pool = Array.new(1000).map { random_id }
count = 0
loop do
  index = rand(pool.size)
  if rand(200) == 0
    pool[index] = random_id
  else
    store(pool[index])
  end
  count += 1
  puts "#{$redis.zcard('online')}" if count % 1000 == 0
end


--
http://tim.lossen.de

Jak Sprats

Jan 10, 2011, 6:02:57 AM
to Redis DB

On the topic of fragmentation (and yes, this idea is a bit out there).

Redis mallocs in two different ways:
1.) operational malloc()ing - parsing the request, sending responses,
etc...
2.) storage malloc()ing - storing data that just sits in RAM
(SET, LPUSH, ZADD, etc...)

The difference between the two is that #1 is usually malloc'd and
freed within a request, whereas #2 remains in RAM for as long as the
data persists.

So my (out there) idea was to have two functions (ozmalloc and szmalloc
in the code) that allocate from different malloc stores (e.g.
malloc and tcmalloc).

The point is, the operational mallocing (#1) could not possibly
fragment the much more important persistent storage mallocing (#2).

I have no idea if this works in practice; it is just an idea I have
always had for reducing malloc fragmentation.

Santiago Perez

Jan 10, 2011, 6:14:53 AM
to redi...@googlegroups.com
That would also allow for separate allocation counters, to narrow the search when tracking down memory leaks.

Pieter Noordhuis

Jan 10, 2011, 6:27:48 AM
to redi...@googlegroups.com
Hi Jak,

The main problem with this is that it requires a memcpy to copy arguments between allocators. Currently, when a command is parsed, every argument is a Redis string object. When it is a SET, the key and value arguments propagate to the store without having to do a memcpy. We've cut down a lot on the malloc/free patterns for the response (since every client has a static buffer of about 8k in 2.2), so that would leave the argument parsing. It is possible to move some parts of it to a different allocator, but that would be adding too much complexity for too little gain (I guess), since in most cases this only affects the command itself and from time to time the key argument.

My guess (untested, but very likely) is that doing a DEL on a large zset causes way more fragmentation than the rapid malloc/free patterns that come from command arguments (where even the arguments that are used in the store later on get a malloc, but no free).

Even for small values, the added memcpy for moving data between allocators will have a big performance penalty since it will no longer be a malloc, but a malloc/malloc/memcpy/free.

That said, ideas like this help to think about the strategies that are currently used and how they can be improved, so thanks for thinking outside the box!

Cheers,
Pieter

Jak Sprats

Jan 11, 2011, 2:10:33 AM
to Redis DB
Hi Pieter,

my view of the C code is more Redis 2.0 than Redis 2.2, so if you have
cut the middleman out between response parsing and object saving, then
nice work.

>> We've cut down a lot on the malloc/free patterns for the response (since every client has a static buffer of about 8k in 2.2)
A subtle but huge improvement, again very, very cool. This is a clear
sign of Redis' maturity: going back and cleaning and polishing.

Just to be pedantic (and this is the complexity [for little gain] you
were referring to): argument parsing could be done so that when a
write op is encountered (e.g. "*3\r\n\$3\r\nSET\r\n", but not on "*3\r
\n\$3\r\nGET\r\n"), the following arguments (e.g. "$3\r\nKEY\r\n
$5\r\nVALUE\r\n" - the "KEY" and "VALUE") would be parsed into the
storage malloc store. But with pipeline parsing and all, this may be
complex - or maybe it isn't.

>> My guess (untested, but very likely) is that doing a DEL on a large zset causes way more fragmentation than the rapid malloc/free patterns that come from command arguments

This is where I am unsure. I have a gut feeling (hey, one of us should
just test this :) that rapid malloc/frees can hit all sorts of corner
cases that lead to memory fragmentation (e.g. when they happen
close to page borders), and they must eventually lead to some sort of
malloc-store rebalancing. Also, rapid malloc/frees happen on every
command (GET, LPOP, etc...). But I don't know any of this, just
guessing.
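The corner cases mentioned above can be made concrete with a toy
first-fit allocator (a pure simulation, not Redis or libc code; real
allocators are far more sophisticated). Interleaving small long-lived
allocations with large short-lived ones leaves plenty of free memory
in total, but no contiguous gap large enough for a bigger request:

```python
class ToyHeap:
    # Simulates a heap with first-fit allocation over a flat address space.
    def __init__(self, size):
        self.size = size
        self.blocks = []  # allocated (offset, length) pairs, sorted by offset

    def alloc(self, length):
        prev_end = 0
        for i, (off, ln) in enumerate(self.blocks):
            if off - prev_end >= length:  # first gap that fits
                self.blocks.insert(i, (prev_end, length))
                return prev_end
            prev_end = off + ln
        if self.size - prev_end >= length:  # tail space
            self.blocks.append((prev_end, length))
            return prev_end
        return None  # no contiguous gap is large enough

    def free(self, offset):
        self.blocks = [(o, l) for (o, l) in self.blocks if o != offset]

    def free_total(self):
        return self.size - sum(l for _, l in self.blocks)

    def largest_gap(self):
        prev_end, best = 0, 0
        for off, ln in self.blocks:
            best = max(best, off - prev_end)
            prev_end = off + ln
        return max(best, self.size - prev_end)

heap = ToyHeap(1000)
# Fill the heap with alternating small (10) and large (90) allocations...
addrs = [heap.alloc(10 if i % 2 == 0 else 90) for i in range(20)]
# ...then free only the large ones, as if short-lived buffers were released.
for i, a in enumerate(addrs):
    if i % 2 == 1:
        heap.free(a)

print(heap.free_total(), heap.largest_gap())  # 900 bytes free, largest gap 90
print(heap.alloc(100))  # None: 900 bytes free, yet a 100-byte request fails
```

The same total amount of free memory would satisfy the request if it
were contiguous; that gap between "free" and "usable" is exactly what
the RSS-vs-used_memory ratio in INFO measures.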

It's a strange suggestion; I cannot find evidence of anyone using this
dual malloc-store approach (or even trying it and saying it was dumb),
so I figured I would throw it out there and see if anyone had feedback
on it.

Again, very nice work (to you and Salvatore) on the memory
optimisations in 2.2; people are gonna love the results, but won't
recognize how brutal the refactoring work to get there was :)

- Jak

albert

Jan 31, 2011, 12:23:52 AM
to Redis DB
Hi,

I am curious: Have the given suggestions been tested? Should I try to
improve my own suggestion?

I want that Redis shirt! :)

Cheers,
Albert

Salvatore Sanfilippo

Jan 31, 2011, 3:17:06 AM
to redi...@googlegroups.com
On Mon, Jan 31, 2011 at 6:23 AM, albert <alb...@googlemail.com> wrote:
> Hi,
>
> I am curious: Have the given suggestions been tested? Should I try to
> improve my own suggestion?
>
> I want that Redis shirt! :)

Hello Albert!

I briefly tested every implementation submitted, but just for a few
minutes, without seeing fragmentation. Before declaring the entries no
good, though, I want to test each one for a longer time over the
course of this week.

So stay tuned ;)

Cheers,
Salvatore

Dvir Volk

Jan 31, 2011, 3:24:23 AM
to redi...@googlegroups.com
Speaking of which, is there any way to buy a Redis t-shirt without winning this contest? :)

Pau Freixes

Feb 17, 2011, 4:37:55 PM
to redi...@googlegroups.com
I believe Salvatore figured out the way to resolve this contest in
this comment: http://code.google.com/p/redis/issues/detail?id=461#c1

--
--pau
