This is related to the number of databases you can select using SELECT.
Basically, different databases are different namespaces for keys. You
can't save a single database on its own, nor replicate one separately.
I hope to kill the feature at some point in the future. Maybe for
Redis 3.0, but I'm not sure.
Redis Cluster does not support multiple databases.
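Conceptually, the numbered databases behave like independent dictionaries living inside one server process. A toy sketch in plain Python (the MiniRedis class is invented for illustration; only the SELECT-style semantics mirror Redis):

```python
# Toy model (not real Redis code): numbered databases as separate
# key namespaces under one instance. Persistence always covers all
# of them at once.

class MiniRedis:
    def __init__(self, n_databases=16):
        # One flat dict per database: the same key may exist in several.
        self._dbs = [{} for _ in range(n_databases)]
        self._current = 0  # database 0 is selected by default

    def select(self, index):
        self._current = index

    def set(self, key, value):
        self._dbs[self._current][key] = value

    def get(self, key):
        return self._dbs[self._current].get(key)

    def save(self):
        # A snapshot spans *all* databases -- there is no way to save
        # or replicate just one of them.
        return {i: dict(db) for i, db in enumerate(self._dbs) if db}

r = MiniRedis()
r.set("user:1", "foo")   # lands in database 0
r.select(1)
r.set("user:1", "bar")   # same key name, different namespace
```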
Salvatore
--
Salvatore 'antirez' Sanfilippo
open source developer - VMware
http://invece.org
"We are what we repeatedly do. Excellence, therefore, is not an act,
but a habit." -- Aristotle
Sorry, this is not possible. You can do that using different instances
and some form of sharding.
However, with Redis, disk I/O is usually not a problem.
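Client-side sharding over several instances can be as simple as hashing each key to pick a connection. A minimal sketch, with plain dicts standing in for the instances (the CRC32-modulo scheme is just one common choice, not anything Redis mandates):

```python
import zlib

class ShardedClient:
    """Route each key to one of several independent stores."""

    def __init__(self, instances):
        self.instances = instances

    def _shard_for(self, key):
        # Deterministic: the same key always maps to the same instance.
        return self.instances[zlib.crc32(key.encode()) % len(self.instances)]

    def set(self, key, value):
        self._shard_for(key)[key] = value

    def get(self, key):
        return self._shard_for(key).get(key)

shards = [{}, {}, {}]          # stand-ins for three Redis instances
client = ShardedClient(shards)
client.set("user:1", "foo")
```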
Salvatore
> I hope to kill the feature at some point in the future. Maybe for
> Redis 3.0, but I'm not sure.
> Redis Cluster does not support multiple databases.
I'd prefer it if you could avoid that. I run several (up to three)
applications that use Redis on the same instance, using separate
databases. Those applications communicate (one is a client
of the other). That gives me the interesting property that when
I make a snapshot of the data (cp dump.rdb) it is consistent
across applications.
If I switch to using three separate Redis instances I will lose
this property, and it will force me to write more logic to support
inconsistent snapshots or to prefix all my keys in all three
applications...
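That consistency property can be made concrete with a small sketch: one shared store snapshotted once captures a cross-application update entirely or not at all, while two independently snapshotted stores can each catch a different side of it (plain Python, with invented key names):

```python
import copy

# Two "apps" share one store (one dump.rdb): a single copy captures
# both sides of a cross-application update, or neither -- never half.

store = {"app_a:orders": 0, "app_b:invoices": 0}

def cross_app_update(s):
    # App A and app B must move together to stay consistent.
    s["app_a:orders"] += 1
    s["app_b:invoices"] += 1

snapshot = copy.deepcopy(store)          # cp dump.rdb, in effect
cross_app_update(store)
# The snapshot is internally consistent: both counters agree.
consistent = snapshot["app_a:orders"] == snapshot["app_b:invoices"]

# With separate instances, the two snapshots are taken independently,
# possibly on opposite sides of the same logical update:
store_a = {"orders": 0}
store_b = {"invoices": 0}
snap_a = copy.deepcopy(store_a)          # snapshot A taken first...
store_a["orders"] += 1                   # ...the update lands between...
store_b["invoices"] += 1
snap_b = copy.deepcopy(store_b)          # ...then snapshot B
inconsistent = snap_a["orders"] != snap_b["invoices"]
```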
--
Pierre 'catwell' Chapuis
It's been discussed for the last 6+ months that Redis Cluster will be
doing away with databases.
> If I switch to using three separate Redis instances I will lose
> this property, and it will force me to write more logic to support
> inconsistent snapshots or to prefix all my keys in all three
> applications...
Whatever you are doing now to support 3 databases is, in any
reasonable language, not difficult to convert to prefixing keys with
database ids, assuming that your keys aren't already sufficiently
prefixed with namespaces.
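That conversion can be hidden behind a thin wrapper, so application code keeps its short keys. A sketch with a dict standing in for the shared flat keyspace (the app prefixes and the ":" separator here are arbitrary choices, invented for illustration):

```python
class PrefixedClient:
    """Replace SELECT <n> with a per-application key prefix."""

    def __init__(self, store, prefix):
        self.store = store        # shared flat keyspace (dict stand-in)
        self.prefix = prefix

    def _key(self, key):
        return f"{self.prefix}:{key}"

    def set(self, key, value):
        self.store[self._key(key)] = value

    def get(self, key):
        return self.store.get(self._key(key))

shared = {}
billing = PrefixedClient(shared, "bil")   # formerly database 1
mailer = PrefixedClient(shared, "mail")   # formerly database 2
billing.set("user:1", "foo")
mailer.set("user:1", "bar")               # no collision
```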
For example, right now I am using databases to split up functionally
different applications. On my development environment, it all runs in
a single instance. In staging and production, they all run on
different instances. However, every application has different prefixes
on all keys by design already, so even if I didn't use different
databases, instances, etc., it still wouldn't be a problem.
Please tell us that you haven't been using <id> -> <object X> in all
three of your databases, and that you've instead been doing things like
user:<id> -> <object>
Regards,
- Josiah
> Whatever you are doing now to support 3 databases is, in any
> reasonable language, not difficult to convert to prefixing keys with
> database ids, assuming that your keys aren't already sufficiently
> prefixed with namespaces.
No, doing so in application code is not hard, but it is not perfect:
* it adds a memory & network bandwidth overhead to each key
(I can live with that though);
* I have to migrate all existing data from one format to the other
(I can do it too).
> Please tell us that you haven't been using <id> -> <object X> in all
> three of your databases, and that you've instead been doing things like
> user:<id> -> <object>
I'm doing this, but I still don't much like the idea of long
namespaces. I would agree that removing DBs is a good idea if Redis
had embedded hashes support, i.e. if instead of this:
user:1 -> Set( name -> "foo", email -> "f...@example.com" )
user:1:friends -> List( 5, 19, 21 )
user:2 -> Set( name -> "bar", email -> "b...@example.com" )
user:2:friends -> List( 6, 10 )
you could have this:
user -> Set(
    1 -> Set(
        name -> "foo",
        email -> "f...@example.com",
        friends -> List( 5, 19, 21 ),
    ),
    2 -> Set(
        name -> "bar",
        email -> "b...@example.com",
        friends -> List( 6, 10 ),
    ),
)
Since it doesn't I find that having some means to separate data like
what DBs provide is useful, even if it's limited.
--
catwell
Unless you are running a few bytes under the 1500-byte Ethernet frame
limit, or your many millions of keys were prematurely optimized to be
very short, it is *very unlikely* you will notice the memory or
network bandwidth change.
> * I have to migrate all existing data from one format to the other
> (I can do it too).
>
>> Please tell us that you haven't been using <id> -> <object X> in all
>> three of your databases, and that you've instead been doing things like
>> user:<id> -> <object>
>
> I'm doing this, but I still don't much like the idea of long namespaces.
> I would agree that removing DBs is a good idea if Redis had
> embedded hashes support, i.e. if instead of this:
Don't get me wrong, like you, I don't *like* long namespaces. But I do
make sure to prefix every key I insert into every database with a 3-4
byte identifier for what part of what application it is related to,
followed by a colon, then whatever the key would have been otherwise.
In the 100 million keys I have spread over the dozen or so databases,
that amounts to under 500 megs among the 30-40 gigs that are being
used, or under 2% overall memory overhead.
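As a back-of-the-envelope check of those figures (100 million keys, a 3-4 byte prefix plus a colon on each):

```python
keys = 100_000_000
prefix_bytes = 4 + 1                     # "abcd" plus ":"
overhead_gb = keys * prefix_bytes / 1e9  # 0.5 GB of raw key bytes
fraction = overhead_gb / 30              # against the low end, 30 GB
# roughly 1.7%, i.e. "under 2% overall memory overhead"
```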
> user:1 -> Set( name -> "foo", email -> "f...@example.com" )
> user:1:friends -> List( 5, 19, 21 )
> user:2 -> Set( name -> "bar", email -> "b...@example.com" )
> user:2:friends -> List( 6, 10 )
>
> you could have this:
>
> user -> Set(
>     1 -> Set(
>         name -> "foo",
>         email -> "f...@example.com",
>         friends -> List( 5, 19, 21 ),
>     ),
>     2 -> Set(
>         name -> "bar",
>         email -> "b...@example.com",
>         friends -> List( 6, 10 ),
>     ),
> )
Presumably you mean Hash everywhere you show Set. I agree, it would be
convenient. Discussions last fall talked about this possibility, but
it was abandoned by saner minds because few people would be satisfied
with a single layer of embedding, or just the use of hashes as
namespaces. Without limitations, it would imply arbitrary nesting
(hashes inside lists, lists inside of hashes inside of hashes one
layer deeper than you show, etc.), and either an explosion of
commands, or sequences of "dig into object X as a sort of new
namespace". Redis as a tool is better this way, even if it discards
this particular convenience for the sake of simplicity.
> Since it doesn't I find that having some means to separate data like
> what DBs provide is useful, even if it's limited.
Absolutely; many of us do the same thing. But since you aren't using
Redis Cluster today, or the non-clustered version of Redis that
removes databases, you'd have to migrate down the line to be affected
by it. Between now and then, there is time to migrate your data to
include prefixes, or for you to decide to stick with the version of
Redis that works well enough for your application (we're on a 6-9
month Redis upgrade cycle for this reason).
Regards,
- Josiah
> Unless you are running a few bytes under the 1500 byte ethernet frame
> limit, or your many millions of keys were prematurely optimized to be
> very short, it is *very unlikely* you will notice the memory or
> network bandwidth change.
I do have millions of keys optimized to be reasonably short. Because
of the way I use Redis, most of the size of my requests is composed of
key names. If you want an idea of the kind of requests I'm using,
check this out: https://gist.github.com/951352
> Don't get me wrong, like you, I don't *like* long namespaces. But I
> do
> make sure to prefix every key I insert into every database with a 3-4
> byte identifier for what part of what application it is related to,
> followed by a colon, then whatever the key would have been otherwise.
> In the 100 million keys I have spread over the dozen or so databases,
> that amounts to under 500 megs among the 30-40 gigs that are being
> used, or under 2% overall memory overhead.
Yes, I do that in each of my applications, I just have to add a global
prefix for each application to make sure they will be happy together
in the same flat keyspace.
> Presumably you mean Hash everywhere you show Set.
Yes.
> I agree, it would be
> convenient. Discussions last fall talked about this possibility, but
> it was abandoned by saner minds because few people would be satisfied
> with a single layer of embedding, or just the use of hashes as
> namespaces. Without limitations, it would imply arbitrary nesting
> (hashes inside lists, lists inside of hashes inside of hashes one
> layer deeper than you show, etc.), and either an explosion of
> commands, or sequences of "dig into object X as a sort of new
> namespace". Redis as a tool is better this way, even if it discards
> this particular convenience for the sake of simplicity.
Or you could just define a separator (let's say ":"...) and keep the
same command set as now:
SET user:1:name "foo"
  (internally) => (IN_HASH user (IN_HASH 1 (SET name foo)))
LPUSH user:1:friends 6
  (internally) => (IN_HASH user (IN_HASH 1 (LPUSH friends 6)))
It would break compatibility with existing code that uses keys that
contain the chosen separator but it would be such an improvement that
I think it is worth it.
Another way to do it is explicitly with a new command:
HLEVEL 2 user 1 SET name foo
HLEVEL 2 user 2 LRANGE friends 0 -1
The point is that we're all doing that in our client libs, and it could
be more efficient if the server could understand it.
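The separator idea is easy to prototype client-side: split the flat key on ":" and descend nested hashes. A toy version in plain Python (nested_set/nested_get are invented names; real server-side support would also have to handle type conflicts, expiry, and Cluster routing):

```python
# Map flat keys like "user:1:name" onto nested dicts, mimicking
# (IN_HASH user (IN_HASH 1 (SET name ...))).

SEP = ":"

def nested_set(root, flat_key, value):
    *path, leaf = flat_key.split(SEP)
    node = root
    for part in path:                 # one IN_HASH step per level
        node = node.setdefault(part, {})
    node[leaf] = value

def nested_get(root, flat_key):
    node = root
    for part in flat_key.split(SEP):
        node = node[part]
    return node

db = {}
nested_set(db, "user:1:name", "foo")
nested_set(db, "user:1:friends", [5, 19, 21])
nested_set(db, "user:2:name", "bar")
```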
> Absolutely; many of us do the same thing. But since you aren't using
> Redis Cluster today, or the non-clustered version of Redis that
> removes databases, you'd have to migrate down the line to be affected
> by it. Between now and then, there is time to migrate your data to
> include prefixes, or for you to decide to stick with the version of
> Redis that works good enough for your application (we're on a 6-9
> month Redis upgrade cycle for this reason).
Of course I can do this. Migrating my data is not really a problem for
me, but I used to find it nice that Redis provides something other
than flat namespaces, so if we remove DBs (which admittedly were a
clumsy way to do it) it would be cool to replace them with something
that works better.
Now I don't know how hard it would be to make these nested structures
work with Redis Cluster and so on, but the OP's message just got me
thinking. Maybe I'll try something similar myself and see how well it
goes.
--
Pierre 'catwell' Chapuis