The config file parser does an atoi() on the string. Should I expect to be
able to use INT_MAX databases, or is there another limit somewhere?
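
For context, a minimal sketch of the kind of parsing I mean (hypothetical
code, not the actual Redis source):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of a "databases N" config line parser: atoi()
 * happily returns anything up to INT_MAX, so nothing here caps the
 * value on its own. */
static int parse_databases_line(const char *line) {
    if (strncmp(line, "databases ", 10) == 0)
        return atoi(line + 10);  /* no range check at all */
    return -1;  /* not a databases directive */
}

int main(void) {
    printf("%d\n", parse_databases_line("databases 2147483647"));  /* INT_MAX parses fine */
    return 0;
}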
--J.
Cheers,
Pieter
Hello Jeremy, there are two problems with that:
1) When there are expiring keys, Redis is designed to actively expire
them. It's not like memcached, where keys are expired only when accessed
and found expired (the right approach for a cache, indeed): we need to
be able to actively purge the database of expired keys. To do so, once
every 100 milliseconds, we sample all the DBs searching for expired keys.
If there are 100000 DBs, this will block Redis for a few milliseconds
every 100 milliseconds, which is indeed not a good idea.
2) When VM is active, we need to sample all the DBs for not recently
accessed keys, in order to swap them out if we need to free memory.
Since we can't be biased towards a given DB, a few keys of every
single DB are sampled, and the best candidate is swapped out.
1000 DBs + expires are probably still ok, but 10k or 100k DBs are going
to cause trouble. I think Redis should exit with an error if the user
tries to configure more than 1024 databases, as this is not going to
work well anyway.
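
To make the cost concrete, here is a rough sketch of the shape of that
per-cycle scan (hypothetical structure and helper names, not the actual
Redis code; the VM sampling in point 2 has the same O(dbnum) shape):

typedef struct redisDb { int id; /* keyspace and expires dicts would live here */ } redisDb;

typedef struct redisServer {
    int dbnum;    /* the "databases" config value */
    redisDb *db;  /* array of dbnum databases */
} redisServer;

/* Hypothetical helper: sample one random key with a TTL, free it if expired. */
static void expireRandomKeyIfNeeded(redisDb *db) { (void)db; }

#define SAMPLES_PER_DB 10  /* assumed per-DB sample size */

/* Runs once every 100 ms: the outer loop touches every DB on every cycle,
 * so the cost grows linearly with dbnum even when most DBs are empty. */
static void activeExpireCycle(redisServer *server) {
    for (int j = 0; j < server->dbnum; j++)
        for (int i = 0; i < SAMPLES_PER_DB; i++)
            expireRandomKeyIfNeeded(&server->db[j]);
}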
Cheers,
Salvatore
--
Salvatore 'antirez' Sanfilippo
http://invece.org
"Once you have something that grows faster than education grows,
you’re always going to get a pop culture.", Alan Kay
Don't introduce artificial limits, ever. Why 1024? Why not 16? You won't improve anything that way.
On Jun 7, 2010 7:45 PM, "Salvatore Sanfilippo" <ant...@gmail.com> wrote:
On Mon, Jun 7, 2010 at 5:38 PM, Jeremy Zawodny <Jer...@zawodny.com> wrote:
> Really? Why's that?
> ...
1024 -> the limit at which you start seeing serious unexpected
performance degradation and measurable "stop the world" moments (with
the right tools...). Everything over this is broken anyway.
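
The startup check I have in mind would be something along these lines
(sketch only; the constant name is made up):

#include <stdio.h>
#include <stdlib.h>

#define MAX_DATABASES 1024  /* the cap discussed above; name is made up */

/* Sketch of a startup-time sanity check: refuse to run rather than
 * degrade unpredictably with an absurd number of DBs. */
static void checkDatabasesConfig(int dbnum) {
    if (dbnum < 1 || dbnum > MAX_DATABASES) {
        fprintf(stderr, "Fatal: 'databases' must be between 1 and %d (got %d)\n",
                MAX_DATABASES, dbnum);
        exit(1);
    }
}

int main(void) {
    checkDatabasesConfig(100000);  /* would abort at startup */
    return 0;
}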
Cheers,
Salvatore
Perhaps it, at least, has nothing to do with powers of two?
> Perhaps it, at least, has nothing to do with powers of two?
Indeed, a professional aberration :)
It's just a matter of orders of magnitude. 1000 DBs, times a 10-step
operation (or even more) per DB per serverClock() cycle == a delay that
starts to be noticeable.
One order of magnitude more: 10k DBs -> out of bounds entirely.
One order of magnitude less: 100 DBs -> ok for all usages.
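
The same estimate in code form (assuming ~10 steps per DB and one cycle
every 100 ms):

#include <stdio.h>

int main(void) {
    const int steps_per_db = 10;  /* assumed sample size per DB */
    const int dbnums[] = {100, 1000, 10000};
    for (int i = 0; i < 3; i++)
        printf("%6d DBs -> ~%7d sampling steps per 100 ms cycle\n",
               dbnums[i], dbnums[i] * steps_per_db);
    return 0;
    /* 100 -> ~1000 steps (fine), 1000 -> ~10000 (noticeable),
       10000 -> ~100000 (out of bounds). */
}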
Cheers,
Salvatore
Just to explain the question in more detail: I'm building a versioned cache. Since I couldn't find a good way to delete all keys matching a certain pattern, and since I might need to delete them before they expire, I thought I'd version the cache using database numbers.
Basically, each cache version would get a database, and I'd use FLUSHDB on the previous generation to immediately get rid of the obsolete values.
I needed to know about the limits and tradeoffs related to the number of databases. It seems my solution is workable, as I won't need more than 30-50 databases. The only slight annoyance is that I have to implement a "database allocator" that will map cache versions to database numbers.
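
For the record, the scheme reduces to something like this sketch using
hiredis (the helper names, the modulo mapping, and the 32-slot pool are
my own assumptions):

#include <stdio.h>
#include <hiredis/hiredis.h>

#define NUM_CACHE_DBS 32  /* assumed pool size, within the 30-50 range above */

/* The "database allocator": map a cache version to a DB slot.
 * Assumes versions advance by one, so version-1 is the previous slot. */
static int db_for_version(long version) {
    return (int)(version % NUM_CACHE_DBS);
}

/* On a version bump: wipe the previous generation's DB, then select the
 * current one for new cache writes. Assumes version >= 1. */
static void rotate_cache_generation(redisContext *c, long version) {
    redisReply *r;
    r = redisCommand(c, "SELECT %d", db_for_version(version - 1));
    freeReplyObject(r);
    r = redisCommand(c, "FLUSHDB");  /* obsolete values gone immediately */
    freeReplyObject(r);
    r = redisCommand(c, "SELECT %d", db_for_version(version));
    freeReplyObject(r);
}

int main(void) {
    /* Requires "databases" >= NUM_CACHE_DBS in the server config. */
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) { fprintf(stderr, "connect failed\n"); return 1; }
    rotate_cache_generation(c, 42);  /* e.g. after bumping to version 42 */
    redisFree(c);
    return 0;
}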
--J.